00:00:00.001 Started by upstream project "autotest-per-patch" build number 127216 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.156 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.157 The recommended git tool is: git 00:00:00.157 using credential 00000000-0000-0000-0000-000000000002 00:00:00.159 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.186 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.284 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.298 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.312 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:07.312 > git config core.sparsecheckout # timeout=10 00:00:07.324 > git read-tree -mu HEAD # timeout=10 00:00:07.341 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:07.365 Commit message: "packer: Add bios builder" 00:00:07.365 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:07.455 [Pipeline] Start of Pipeline 00:00:07.471 [Pipeline] library 00:00:07.473 Loading library shm_lib@master 00:00:08.587 Library shm_lib@master is cached. Copying from home. 00:00:08.618 [Pipeline] node 00:00:08.751 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:08.754 [Pipeline] { 00:00:08.894 [Pipeline] catchError 00:00:08.899 [Pipeline] { 00:00:08.941 [Pipeline] wrap 00:00:08.959 [Pipeline] { 00:00:08.975 [Pipeline] stage 00:00:08.980 [Pipeline] { (Prologue) 00:00:09.001 [Pipeline] echo 00:00:09.003 Node: VM-host-WFP1 00:00:09.009 [Pipeline] cleanWs 00:00:09.015 [WS-CLEANUP] Deleting project workspace... 00:00:09.015 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.022 [WS-CLEANUP] done 00:00:09.204 [Pipeline] setCustomBuildProperty 00:00:09.265 [Pipeline] httpRequest 00:00:09.282 [Pipeline] echo 00:00:09.284 Sorcerer 10.211.164.101 is alive 00:00:09.291 [Pipeline] httpRequest 00:00:09.295 HttpMethod: GET 00:00:09.295 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:09.296 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:09.301 Response Code: HTTP/1.1 200 OK 00:00:09.302 Success: Status code 200 is in the accepted range: 200,404 00:00:09.302 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:11.518 [Pipeline] sh 00:00:11.800 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:11.817 [Pipeline] httpRequest 00:00:11.847 [Pipeline] echo 00:00:11.850 Sorcerer 10.211.164.101 is alive 00:00:11.859 [Pipeline] httpRequest 00:00:11.863 HttpMethod: GET 00:00:11.864 URL: http://10.211.164.101/packages/spdk_1beb86cd6a4baedf74c720d1dc8e6044993864ee.tar.gz 00:00:11.864 Sending request to url: http://10.211.164.101/packages/spdk_1beb86cd6a4baedf74c720d1dc8e6044993864ee.tar.gz 00:00:11.887 Response Code: HTTP/1.1 200 OK 00:00:11.887 Success: Status code 200 is in the accepted range: 200,404 00:00:11.888 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_1beb86cd6a4baedf74c720d1dc8e6044993864ee.tar.gz 00:01:11.545 [Pipeline] sh 00:01:11.827 + tar --no-same-owner -xf spdk_1beb86cd6a4baedf74c720d1dc8e6044993864ee.tar.gz 00:01:14.367 [Pipeline] sh 00:01:14.642 + git -C spdk log --oneline -n5 00:01:14.642 1beb86cd6 lib/idxd: add descriptors for DIX generate 00:01:14.642 477912bde lib/accel: add spdk_accel_append_dix_generate/verify 00:01:14.642 325310f6a accel_perf: add support for DIX Generate/Verify 00:01:14.642 fcdc45f1b test/accel/dif: add DIX Generate/Verify suites 00:01:14.642 ae7704717 lib/accel: add DIX verify 00:01:14.660 [Pipeline] writeFile 00:01:14.676 [Pipeline] sh 00:01:14.956 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:14.968 [Pipeline] sh 00:01:15.252 + cat autorun-spdk.conf 00:01:15.252 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.252 SPDK_TEST_NVME=1 00:01:15.252 SPDK_TEST_FTL=1 00:01:15.252 SPDK_TEST_ISAL=1 00:01:15.252 SPDK_RUN_ASAN=1 00:01:15.252 SPDK_RUN_UBSAN=1 00:01:15.252 SPDK_TEST_XNVME=1 00:01:15.252 SPDK_TEST_NVME_FDP=1 00:01:15.252 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.259 RUN_NIGHTLY=0 00:01:15.261 [Pipeline] } 00:01:15.277 [Pipeline] // stage 00:01:15.293 [Pipeline] stage 00:01:15.294 [Pipeline] { (Run VM) 00:01:15.309 [Pipeline] sh 00:01:15.610 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:15.610 + echo 'Start stage prepare_nvme.sh' 00:01:15.610 Start stage prepare_nvme.sh 00:01:15.610 + [[ -n 6 ]] 00:01:15.610 + disk_prefix=ex6 00:01:15.610 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:15.610 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:15.610 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:15.610 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.610 ++ SPDK_TEST_NVME=1 00:01:15.610 ++ SPDK_TEST_FTL=1 00:01:15.610 ++ SPDK_TEST_ISAL=1 00:01:15.610 ++ SPDK_RUN_ASAN=1 00:01:15.610 ++ SPDK_RUN_UBSAN=1 00:01:15.610 ++ SPDK_TEST_XNVME=1 00:01:15.610 ++ SPDK_TEST_NVME_FDP=1 00:01:15.610 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:15.610 ++ RUN_NIGHTLY=0 00:01:15.610 + cd 
/var/jenkins/workspace/nvme-vg-autotest_2 00:01:15.610 + nvme_files=() 00:01:15.610 + declare -A nvme_files 00:01:15.610 + backend_dir=/var/lib/libvirt/images/backends 00:01:15.610 + nvme_files['nvme.img']=5G 00:01:15.610 + nvme_files['nvme-cmb.img']=5G 00:01:15.610 + nvme_files['nvme-multi0.img']=4G 00:01:15.610 + nvme_files['nvme-multi1.img']=4G 00:01:15.610 + nvme_files['nvme-multi2.img']=4G 00:01:15.611 + nvme_files['nvme-openstack.img']=8G 00:01:15.611 + nvme_files['nvme-zns.img']=5G 00:01:15.611 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:15.611 + (( SPDK_TEST_FTL == 1 )) 00:01:15.611 + nvme_files["nvme-ftl.img"]=6G 00:01:15.611 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:15.611 + nvme_files["nvme-fdp.img"]=1G 00:01:15.611 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:15.611 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.611 + for nvme in "${!nvme_files[@]}" 00:01:15.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:15.870 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:15.870 + for nvme in "${!nvme_files[@]}" 00:01:15.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:01:15.870 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:15.870 + for nvme in "${!nvme_files[@]}" 00:01:15.870 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:15.870 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:15.870 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:15.870 + echo 'End stage prepare_nvme.sh' 00:01:15.870 End stage 
prepare_nvme.sh 00:01:15.882 [Pipeline] sh 00:01:16.164 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:16.164 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:16.164 00:01:16.164 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:16.164 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:16.164 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:16.164 HELP=0 00:01:16.164 DRY_RUN=0 00:01:16.164 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:01:16.164 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:16.164 NVME_AUTO_CREATE=0 00:01:16.164 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:01:16.164 NVME_CMB=,,,, 00:01:16.164 NVME_PMR=,,,, 00:01:16.164 NVME_ZNS=,,,, 00:01:16.164 NVME_MS=true,,,, 00:01:16.164 NVME_FDP=,,,on, 00:01:16.164 SPDK_VAGRANT_DISTRO=fedora38 00:01:16.164 SPDK_VAGRANT_VMCPU=10 00:01:16.164 SPDK_VAGRANT_VMRAM=12288 00:01:16.164 SPDK_VAGRANT_PROVIDER=libvirt 00:01:16.164 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:16.164 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:16.164 SPDK_OPENSTACK_NETWORK=0 00:01:16.164 VAGRANT_PACKAGE_BOX=0 00:01:16.164 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:16.164 FORCE_DISTRO=true 00:01:16.164 VAGRANT_BOX_VERSION= 00:01:16.164 EXTRA_VAGRANTFILES= 00:01:16.164 NIC_MODEL=e1000 00:01:16.164 00:01:16.164 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:01:16.164 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:18.699 Bringing machine 'default' up with 'libvirt' provider... 00:01:20.078 ==> default: Creating image (snapshot of base box volume). 00:01:20.337 ==> default: Creating domain with the following settings... 
00:01:20.337 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721994907_a20458abbcba5715b4bf 00:01:20.337 ==> default: -- Domain type: kvm 00:01:20.337 ==> default: -- Cpus: 10 00:01:20.337 ==> default: -- Feature: acpi 00:01:20.337 ==> default: -- Feature: apic 00:01:20.337 ==> default: -- Feature: pae 00:01:20.337 ==> default: -- Memory: 12288M 00:01:20.337 ==> default: -- Memory Backing: hugepages: 00:01:20.337 ==> default: -- Management MAC: 00:01:20.337 ==> default: -- Loader: 00:01:20.337 ==> default: -- Nvram: 00:01:20.337 ==> default: -- Base box: spdk/fedora38 00:01:20.337 ==> default: -- Storage pool: default 00:01:20.337 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721994907_a20458abbcba5715b4bf.img (20G) 00:01:20.337 ==> default: -- Volume Cache: default 00:01:20.337 ==> default: -- Kernel: 00:01:20.337 ==> default: -- Initrd: 00:01:20.337 ==> default: -- Graphics Type: vnc 00:01:20.337 ==> default: -- Graphics Port: -1 00:01:20.337 ==> default: -- Graphics IP: 127.0.0.1 00:01:20.337 ==> default: -- Graphics Password: Not defined 00:01:20.337 ==> default: -- Video Type: cirrus 00:01:20.337 ==> default: -- Video VRAM: 9216 00:01:20.337 ==> default: -- Sound Type: 00:01:20.337 ==> default: -- Keymap: en-us 00:01:20.337 ==> default: -- TPM Path: 00:01:20.337 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:20.337 ==> default: -- Command line args: 00:01:20.337 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:20.338 ==> default: -> value=-drive, 00:01:20.338 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:20.338 ==> default: -> value=-device, 00:01:20.338 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:20.596 ==> default: Creating shared folders metadata... 00:01:20.596 ==> default: Starting domain. 00:01:21.974 ==> default: Waiting for domain to get an IP address... 00:01:40.061 ==> default: Waiting for SSH to become available... 00:01:40.061 ==> default: Configuring and enabling network interfaces... 00:01:44.248 default: SSH address: 192.168.121.169:22 00:01:44.248 default: SSH username: vagrant 00:01:44.248 default: SSH auth method: private key 00:01:47.550 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:55.728 ==> default: Mounting SSHFS shared folder... 00:01:57.680 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:57.680 ==> default: Checking Mount.. 00:01:59.584 ==> default: Folder Successfully Mounted! 00:01:59.584 ==> default: Running provisioner: file... 00:02:00.522 default: ~/.gitconfig => .gitconfig 00:02:01.090 00:02:01.090 SUCCESS! 00:02:01.090 00:02:01.090 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:02:01.090 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:01.090 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:02:01.090 00:02:01.100 [Pipeline] } 00:02:01.118 [Pipeline] // stage 00:02:01.128 [Pipeline] dir 00:02:01.128 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:02:01.130 [Pipeline] { 00:02:01.144 [Pipeline] catchError 00:02:01.145 [Pipeline] { 00:02:01.159 [Pipeline] sh 00:02:01.441 + vagrant ssh-config --host vagrant 00:02:01.441 + sed -ne /^Host/,$p 00:02:01.441 + tee ssh_conf 00:02:04.739 Host vagrant 00:02:04.739 HostName 192.168.121.169 00:02:04.739 User vagrant 00:02:04.739 Port 22 00:02:04.739 UserKnownHostsFile /dev/null 00:02:04.739 StrictHostKeyChecking no 00:02:04.739 PasswordAuthentication no 00:02:04.739 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:04.739 IdentitiesOnly yes 00:02:04.739 LogLevel FATAL 00:02:04.739 ForwardAgent yes 00:02:04.739 ForwardX11 yes 00:02:04.739 00:02:04.754 [Pipeline] withEnv 00:02:04.757 [Pipeline] { 00:02:04.773 [Pipeline] sh 00:02:05.056 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:05.056 source /etc/os-release 00:02:05.056 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.056 # Minimal, systemd-like check. 
00:02:05.056 if [[ -e /.dockerenv ]]; then 00:02:05.056 # Clear garbage from the node's name: 00:02:05.056 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.056 # $HOSTNAME is the actual container id 00:02:05.056 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.056 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.056 # We can assume this is a mount from a host where container is running, 00:02:05.056 # so fetch its hostname to easily identify the target swarm worker. 00:02:05.056 container="$(< /etc/hostname) ($agent)" 00:02:05.056 else 00:02:05.056 # Fallback 00:02:05.056 container=$agent 00:02:05.056 fi 00:02:05.056 fi 00:02:05.056 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.056 00:02:05.327 [Pipeline] } 00:02:05.347 [Pipeline] // withEnv 00:02:05.356 [Pipeline] setCustomBuildProperty 00:02:05.371 [Pipeline] stage 00:02:05.373 [Pipeline] { (Tests) 00:02:05.391 [Pipeline] sh 00:02:05.673 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:05.946 [Pipeline] sh 00:02:06.225 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.499 [Pipeline] timeout 00:02:06.499 Timeout set to expire in 40 min 00:02:06.501 [Pipeline] { 00:02:06.517 [Pipeline] sh 00:02:06.797 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:07.366 HEAD is now at 1beb86cd6 lib/idxd: add descriptors for DIX generate 00:02:07.379 [Pipeline] sh 00:02:07.663 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:07.937 [Pipeline] sh 00:02:08.216 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.493 [Pipeline] sh 00:02:08.778 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:09.080 ++ readlink -f spdk_repo 00:02:09.080 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:09.080 + [[ -n /home/vagrant/spdk_repo ]] 00:02:09.080 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:09.080 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:09.080 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:09.080 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:09.080 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:09.080 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:09.080 + cd /home/vagrant/spdk_repo 00:02:09.080 + source /etc/os-release 00:02:09.080 ++ NAME='Fedora Linux' 00:02:09.080 ++ VERSION='38 (Cloud Edition)' 00:02:09.080 ++ ID=fedora 00:02:09.080 ++ VERSION_ID=38 00:02:09.080 ++ VERSION_CODENAME= 00:02:09.080 ++ PLATFORM_ID=platform:f38 00:02:09.080 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:09.080 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:09.080 ++ LOGO=fedora-logo-icon 00:02:09.080 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:09.080 ++ HOME_URL=https://fedoraproject.org/ 00:02:09.080 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:09.080 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:09.080 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:09.080 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:09.080 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:09.080 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:09.080 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:09.080 ++ SUPPORT_END=2024-05-14 00:02:09.080 ++ VARIANT='Cloud Edition' 00:02:09.080 ++ VARIANT_ID=cloud 00:02:09.080 + uname -a 00:02:09.080 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:09.080 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.338 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.597 Hugepages 00:02:09.597 node hugesize free / total 00:02:09.597 node0 1048576kB 0 / 0 00:02:09.597 node0 2048kB 0 / 0 00:02:09.597 00:02:09.597 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.856 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.856 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.856 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:09.856 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:09.856 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:09.856 + rm -f /tmp/spdk-ld-path 00:02:09.857 + source autorun-spdk.conf 00:02:09.857 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.857 ++ SPDK_TEST_NVME=1 00:02:09.857 ++ SPDK_TEST_FTL=1 00:02:09.857 ++ SPDK_TEST_ISAL=1 00:02:09.857 ++ SPDK_RUN_ASAN=1 00:02:09.857 ++ SPDK_RUN_UBSAN=1 00:02:09.857 ++ SPDK_TEST_XNVME=1 00:02:09.857 ++ SPDK_TEST_NVME_FDP=1 00:02:09.857 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.857 ++ RUN_NIGHTLY=0 00:02:09.857 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.857 + [[ -n '' ]] 00:02:09.857 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.857 + for M in /var/spdk/build-*-manifest.txt 00:02:09.857 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.857 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.857 + for M in /var/spdk/build-*-manifest.txt 00:02:09.857 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.857 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.857 ++ uname 00:02:09.857 + [[ Linux == \L\i\n\u\x ]] 00:02:09.857 + sudo dmesg -T 00:02:10.117 + sudo dmesg --clear 00:02:10.117 + dmesg_pid=5139 00:02:10.117 + [[ Fedora Linux == FreeBSD ]] 00:02:10.117 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.117 + sudo dmesg -Tw 00:02:10.117 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.117 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.117 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.117 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.117 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.117 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.117 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.117 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.117 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.117 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.117 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.117 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.117 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.117 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:10.117 Test configuration: 00:02:10.117 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.117 SPDK_TEST_NVME=1 00:02:10.117 SPDK_TEST_FTL=1 00:02:10.117 SPDK_TEST_ISAL=1 00:02:10.117 SPDK_RUN_ASAN=1 00:02:10.117 SPDK_RUN_UBSAN=1 00:02:10.117 SPDK_TEST_XNVME=1 00:02:10.117 SPDK_TEST_NVME_FDP=1 00:02:10.117 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.117 RUN_NIGHTLY=0 11:55:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:10.117 11:55:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.117 11:55:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.117 11:55:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.117 11:55:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.117 11:55:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.117 11:55:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.117 11:55:57 -- paths/export.sh@5 -- $ export PATH 00:02:10.117 11:55:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.117 11:55:57 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:10.117 11:55:57 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:10.117 11:55:57 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721994957.XXXXXX 00:02:10.117 11:55:58 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721994957.tnmAV3 00:02:10.117 11:55:58 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:10.117 11:55:58 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:10.117 11:55:58 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:10.117 11:55:58 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:10.117 11:55:58 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.117 11:55:58 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:10.117 11:55:58 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:10.117 11:55:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.117 11:55:58 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:10.117 11:55:58 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:10.117 11:55:58 -- pm/common@17 -- $ local monitor 00:02:10.117 11:55:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.117 11:55:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.117 11:55:58 -- pm/common@25 -- $ sleep 1 00:02:10.117 11:55:58 -- pm/common@21 -- $ date +%s 00:02:10.117 11:55:58 -- pm/common@21 -- $ date +%s 00:02:10.117 11:55:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721994958 00:02:10.117 11:55:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721994958 00:02:10.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721994958_collect-vmstat.pm.log 00:02:10.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721994958_collect-cpu-load.pm.log 00:02:11.313 11:55:59 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:11.313 11:55:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.313 11:55:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.313 11:55:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:11.313 11:55:59 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.313 Fri Jul 26 11:55:59 AM UTC 2024 00:02:11.313 11:55:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.313 v24.09-pre-327-g1beb86cd6 00:02:11.313 11:55:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:11.313 11:55:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:11.313 11:55:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.313 11:55:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.313 11:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.313 ************************************ 00:02:11.313 START TEST asan 00:02:11.313 ************************************ 00:02:11.313 using asan 00:02:11.313 11:55:59 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:11.313 00:02:11.313 
real 0m0.000s 00:02:11.313 user 0m0.000s 00:02:11.313 sys 0m0.000s 00:02:11.313 11:55:59 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.313 11:55:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.313 ************************************ 00:02:11.313 END TEST asan 00:02:11.313 ************************************ 00:02:11.313 11:55:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.313 11:55:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.313 11:55:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.313 11:55:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.313 11:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.313 ************************************ 00:02:11.313 START TEST ubsan 00:02:11.313 ************************************ 00:02:11.313 using ubsan 00:02:11.313 11:55:59 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:11.313 00:02:11.313 real 0m0.000s 00:02:11.313 user 0m0.000s 00:02:11.313 sys 0m0.000s 00:02:11.313 11:55:59 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:11.313 11:55:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.313 ************************************ 00:02:11.313 END TEST ubsan 00:02:11.313 ************************************ 00:02:11.313 11:55:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.313 11:55:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.313 11:55:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.313 11:55:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:11.573 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:11.573 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.141 Using 'verbs' RDMA provider 00:02:27.973 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:42.860 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:42.860 Creating mk/config.mk...done. 00:02:42.860 Creating mk/cc.flags.mk...done. 00:02:42.860 Type 'make' to build. 
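The configure flags recorded just above (--enable-asan, --enable-ubsan, --with-xnvme, and so on) appear to be derived from the autorun-spdk.conf values printed earlier in this log via get_config_params. A minimal, hedged sketch of that mapping follows, under the assumption that each SPDK_* switch simply toggles the correspondingly named configure flag; the real helper in SPDK's autotest_common.sh is not reproduced here, and the conf path is taken from the log.

#!/usr/bin/env bash
# Sketch only: derive configure flags from the test configuration shown in this log.
source /home/vagrant/spdk_repo/autorun-spdk.conf
config_params='--enable-debug --enable-werror'             # common defaults seen above
(( SPDK_RUN_ASAN == 1 ))   && config_params+=' --enable-asan'
(( SPDK_RUN_UBSAN == 1 ))  && config_params+=' --enable-ubsan'
(( SPDK_TEST_XNVME == 1 )) && config_params+=' --with-xnvme'
echo "./configure $config_params"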
00:02:42.860 11:56:30 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:42.860 11:56:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:42.860 11:56:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:42.860 11:56:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.860 ************************************ 00:02:42.860 START TEST make 00:02:42.860 ************************************ 00:02:42.860 11:56:30 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:43.120 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:43.120 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:43.120 meson setup builddir \ 00:02:43.120 -Dwith-libaio=enabled \ 00:02:43.120 -Dwith-liburing=enabled \ 00:02:43.120 -Dwith-libvfn=disabled \ 00:02:43.120 -Dwith-spdk=false && \ 00:02:43.120 meson compile -C builddir && \ 00:02:43.120 cd -) 00:02:43.120 make[1]: Nothing to be done for 'all'. 00:02:45.648 The Meson build system 00:02:45.648 Version: 1.3.1 00:02:45.648 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:45.648 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:45.648 Build type: native build 00:02:45.648 Project name: xnvme 00:02:45.648 Project version: 0.7.3 00:02:45.648 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:45.648 C linker for the host machine: cc ld.bfd 2.39-16 00:02:45.648 Host machine cpu family: x86_64 00:02:45.648 Host machine cpu: x86_64 00:02:45.648 Message: host_machine.system: linux 00:02:45.648 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:45.648 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:45.648 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:45.648 Run-time dependency threads found: YES 00:02:45.648 Has header "setupapi.h" : NO 00:02:45.648 Has header "linux/blkzoned.h" : YES 00:02:45.648 Has header "linux/blkzoned.h" : YES (cached) 00:02:45.648 Has header "libaio.h" : YES 00:02:45.648 Library aio found: YES 00:02:45.648 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:45.648 Run-time dependency liburing found: YES 2.2 00:02:45.648 Dependency libvfn skipped: feature with-libvfn disabled 00:02:45.648 Run-time dependency appleframeworks found: NO (tried framework) 00:02:45.648 Run-time dependency appleframeworks found: NO (tried framework) 00:02:45.648 Configuring xnvme_config.h using configuration 00:02:45.648 Configuring xnvme.spec using configuration 00:02:45.648 Run-time dependency bash-completion found: YES 2.11 00:02:45.648 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:45.648 Program cp found: YES (/usr/bin/cp) 00:02:45.648 Has header "winsock2.h" : NO 00:02:45.648 Has header "dbghelp.h" : NO 00:02:45.648 Library rpcrt4 found: NO 00:02:45.648 Library rt found: YES 00:02:45.648 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:45.648 Found CMake: /usr/bin/cmake (3.27.7) 00:02:45.648 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:45.648 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:45.648 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:45.648 Build targets in project: 32 00:02:45.648 00:02:45.648 xnvme 0.7.3 00:02:45.648 00:02:45.648 User defined options 00:02:45.648 with-libaio : enabled 00:02:45.648 with-liburing: enabled 00:02:45.648 with-libvfn : disabled 00:02:45.648 with-spdk : false 00:02:45.648 00:02:45.648 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.648 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:45.648 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:45.648 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:45.648 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:45.648 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:45.648 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:45.648 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:45.648 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:45.648 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:45.648 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:45.648 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:45.648 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:45.648 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:45.648 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:45.648 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:45.648 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:45.905 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:45.905 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:45.905 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:45.905 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:45.905 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:45.905 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:45.905 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:45.905 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:45.905 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:45.905 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:45.905 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:45.905 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:45.905 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:45.905 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:45.905 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:45.905 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:45.905 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:45.905 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:45.905 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:45.905 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:45.905 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:45.905 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:45.905 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:45.905 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:45.905 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:45.905 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:45.905 
[42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:45.905 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:45.905 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:45.905 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:45.905 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:45.905 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:45.905 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:45.905 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:45.905 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:45.905 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:45.905 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:45.905 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:46.162 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:46.162 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:46.162 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:46.162 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:46.162 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:46.162 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:46.162 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:46.162 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:46.162 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:46.162 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:46.162 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:46.162 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:46.162 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:46.162 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:46.162 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:46.162 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:46.162 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:46.162 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:46.419 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:46.419 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:46.419 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:46.419 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:46.419 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:46.419 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:46.419 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:46.419 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:46.419 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:46.419 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:46.419 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:46.419 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:46.419 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:46.419 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:46.419 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:46.419 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:46.419 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:46.419 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:46.419 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:46.419 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:46.419 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:46.682 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:46.682 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:46.682 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:46.682 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:46.682 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:46.682 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:46.682 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:46.682 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:46.682 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:46.682 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:46.682 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:46.682 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:46.682 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:46.682 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:46.682 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:46.682 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:46.682 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:46.682 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:46.682 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:46.682 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:46.682 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:46.682 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:46.682 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:46.682 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:46.682 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:46.682 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:46.682 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:46.682 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:46.682 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:46.682 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:46.682 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:46.682 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:46.946 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:46.946 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:46.946 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:46.946 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:46.946 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_ident.c.o 00:02:46.946 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:46.946 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:46.946 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:46.946 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:46.946 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:46.946 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:46.946 [136/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:46.946 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:46.946 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:46.946 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:46.946 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:46.946 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:46.946 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:46.946 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:46.946 [144/203] Linking target lib/libxnvme.so 00:02:46.946 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:47.203 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:47.203 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:47.203 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:47.203 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:47.203 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:47.203 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:47.203 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:47.203 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:47.203 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:47.203 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:47.203 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:47.203 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:47.203 [158/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:47.203 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:47.203 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:47.203 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:47.203 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:47.203 [163/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:47.461 [164/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:47.461 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:47.461 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:47.461 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:47.461 [168/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:47.461 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:47.461 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:47.461 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:47.461 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:47.719 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:47.719 [174/203] Linking static target lib/libxnvme.a 00:02:47.719 [175/203] Linking target tests/xnvme_tests_lblk 
00:02:47.719 [176/203] Linking target tests/xnvme_tests_enum 00:02:47.719 [177/203] Linking target tests/xnvme_tests_cli 00:02:47.719 [178/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:47.719 [179/203] Linking target tests/xnvme_tests_ioworker 00:02:47.719 [180/203] Linking target tests/xnvme_tests_buf 00:02:47.719 [181/203] Linking target tests/xnvme_tests_znd_append 00:02:47.719 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:47.719 [183/203] Linking target tests/xnvme_tests_znd_state 00:02:47.719 [184/203] Linking target tests/xnvme_tests_async_intf 00:02:47.719 [185/203] Linking target tests/xnvme_tests_scc 00:02:47.719 [186/203] Linking target tests/xnvme_tests_xnvme_file 00:02:47.719 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:47.719 [188/203] Linking target tools/xnvme_file 00:02:47.719 [189/203] Linking target tests/xnvme_tests_kvs 00:02:47.719 [190/203] Linking target tests/xnvme_tests_map 00:02:47.719 [191/203] Linking target tools/zoned 00:02:47.719 [192/203] Linking target tools/lblk 00:02:47.719 [193/203] Linking target tools/xdd 00:02:47.719 [194/203] Linking target tools/xnvme 00:02:47.719 [195/203] Linking target tools/kvs 00:02:47.719 [196/203] Linking target examples/xnvme_enum 00:02:47.719 [197/203] Linking target examples/xnvme_dev 00:02:47.719 [198/203] Linking target examples/xnvme_single_async 00:02:47.719 [199/203] Linking target examples/xnvme_io_async 00:02:47.719 [200/203] Linking target examples/xnvme_hello 00:02:47.719 [201/203] Linking target examples/zoned_io_async 00:02:47.719 [202/203] Linking target examples/xnvme_single_sync 00:02:47.719 [203/203] Linking target examples/zoned_io_sync 00:02:47.719 INFO: autodetecting backend as ninja 00:02:47.719 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:47.978 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:53.244 The Meson build system 00:02:53.244 Version: 1.3.1 00:02:53.244 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:53.244 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:53.244 Build type: native build 00:02:53.244 Program cat found: YES (/usr/bin/cat) 00:02:53.244 Project name: DPDK 00:02:53.244 Project version: 24.03.0 00:02:53.244 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:53.244 C linker for the host machine: cc ld.bfd 2.39-16 00:02:53.244 Host machine cpu family: x86_64 00:02:53.244 Host machine cpu: x86_64 00:02:53.244 Message: ## Building in Developer Mode ## 00:02:53.244 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:53.244 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:53.244 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.244 Program python3 found: YES (/usr/bin/python3) 00:02:53.244 Program cat found: YES (/usr/bin/cat) 00:02:53.244 Compiler for C supports arguments -march=native: YES 00:02:53.244 Checking for size of "void *" : 8 00:02:53.244 Checking for size of "void *" : 8 (cached) 00:02:53.244 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:53.244 Library m found: YES 00:02:53.244 Library numa found: YES 00:02:53.244 Has header "numaif.h" : YES 00:02:53.244 Library fdt found: NO 00:02:53.244 Library execinfo found: NO 00:02:53.244 Has header "execinfo.h" : YES 00:02:53.244 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:53.244 Run-time 
dependency libarchive found: NO (tried pkgconfig) 00:02:53.244 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.244 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.244 Run-time dependency openssl found: YES 3.0.9 00:02:53.244 Run-time dependency libpcap found: YES 1.10.4 00:02:53.244 Has header "pcap.h" with dependency libpcap: YES 00:02:53.244 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.244 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.244 Compiler for C supports arguments -Wformat: YES 00:02:53.244 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:53.244 Compiler for C supports arguments -Wformat-security: NO 00:02:53.244 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.244 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.244 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.244 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.244 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.244 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.244 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.244 Compiler for C supports arguments -Wundef: YES 00:02:53.244 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.244 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.244 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:53.244 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.244 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:53.244 Program objdump found: YES (/usr/bin/objdump) 00:02:53.244 Compiler for C supports arguments -mavx512f: YES 00:02:53.244 Checking if "AVX512 checking" compiles: YES 00:02:53.244 Fetching value of define "__SSE4_2__" : 1 00:02:53.244 Fetching value of define "__AES__" : 1 00:02:53.244 Fetching value of define "__AVX__" : 1 00:02:53.244 Fetching value of define "__AVX2__" : 1 00:02:53.244 Fetching value of define "__AVX512BW__" : 1 00:02:53.244 Fetching value of define "__AVX512CD__" : 1 00:02:53.244 Fetching value of define "__AVX512DQ__" : 1 00:02:53.244 Fetching value of define "__AVX512F__" : 1 00:02:53.244 Fetching value of define "__AVX512VL__" : 1 00:02:53.244 Fetching value of define "__PCLMUL__" : 1 00:02:53.244 Fetching value of define "__RDRND__" : 1 00:02:53.244 Fetching value of define "__RDSEED__" : 1 00:02:53.244 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.244 Fetching value of define "__znver1__" : (undefined) 00:02:53.244 Fetching value of define "__znver2__" : (undefined) 00:02:53.244 Fetching value of define "__znver3__" : (undefined) 00:02:53.244 Fetching value of define "__znver4__" : (undefined) 00:02:53.244 Library asan found: YES 00:02:53.244 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:53.244 Message: lib/log: Defining dependency "log" 00:02:53.244 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.244 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.244 Library rt found: YES 00:02:53.244 Checking for function "getentropy" : NO 00:02:53.244 Message: lib/eal: Defining dependency "eal" 00:02:53.244 Message: lib/ring: Defining dependency "ring" 00:02:53.244 Message: lib/rcu: Defining dependency "rcu" 00:02:53.244 Message: lib/mempool: Defining dependency "mempool" 00:02:53.244 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.244 Fetching value of define "__PCLMUL__" : 1 
(cached) 00:02:53.244 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.244 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.244 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.244 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.244 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:53.244 Compiler for C supports arguments -mpclmul: YES 00:02:53.244 Compiler for C supports arguments -maes: YES 00:02:53.244 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.244 Compiler for C supports arguments -mavx512bw: YES 00:02:53.244 Compiler for C supports arguments -mavx512dq: YES 00:02:53.244 Compiler for C supports arguments -mavx512vl: YES 00:02:53.244 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.244 Compiler for C supports arguments -mavx2: YES 00:02:53.244 Compiler for C supports arguments -mavx: YES 00:02:53.244 Message: lib/net: Defining dependency "net" 00:02:53.244 Message: lib/meter: Defining dependency "meter" 00:02:53.244 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.244 Message: lib/pci: Defining dependency "pci" 00:02:53.244 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.244 Message: lib/hash: Defining dependency "hash" 00:02:53.244 Message: lib/timer: Defining dependency "timer" 00:02:53.244 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.244 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.244 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.244 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.244 Message: lib/power: Defining dependency "power" 00:02:53.244 Message: lib/reorder: Defining dependency "reorder" 00:02:53.244 Message: lib/security: Defining dependency "security" 00:02:53.244 Has header "linux/userfaultfd.h" : YES 00:02:53.244 Has header "linux/vduse.h" : YES 00:02:53.244 Message: lib/vhost: Defining dependency "vhost" 00:02:53.244 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:53.244 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:53.244 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.244 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.244 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:53.244 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:53.244 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:53.244 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:53.244 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:53.244 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:53.244 Program doxygen found: YES (/usr/bin/doxygen) 00:02:53.244 Configuring doxy-api-html.conf using configuration 00:02:53.244 Configuring doxy-api-man.conf using configuration 00:02:53.244 Program mandb found: YES (/usr/bin/mandb) 00:02:53.244 Program sphinx-build found: NO 00:02:53.244 Configuring rte_build_config.h using configuration 00:02:53.244 Message: 00:02:53.244 ================= 00:02:53.244 Applications Enabled 00:02:53.244 ================= 00:02:53.244 00:02:53.244 apps: 00:02:53.244 00:02:53.244 00:02:53.244 Message: 00:02:53.245 ================= 00:02:53.245 Libraries Enabled 00:02:53.245 ================= 00:02:53.245 00:02:53.245 libs: 00:02:53.245 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:53.245 net, meter, ethdev, pci, 
cmdline, hash, timer, compressdev, 00:02:53.245 cryptodev, dmadev, power, reorder, security, vhost, 00:02:53.245 00:02:53.245 Message: 00:02:53.245 =============== 00:02:53.245 Drivers Enabled 00:02:53.245 =============== 00:02:53.245 00:02:53.245 common: 00:02:53.245 00:02:53.245 bus: 00:02:53.245 pci, vdev, 00:02:53.245 mempool: 00:02:53.245 ring, 00:02:53.245 dma: 00:02:53.245 00:02:53.245 net: 00:02:53.245 00:02:53.245 crypto: 00:02:53.245 00:02:53.245 compress: 00:02:53.245 00:02:53.245 vdpa: 00:02:53.245 00:02:53.245 00:02:53.245 Message: 00:02:53.245 ================= 00:02:53.245 Content Skipped 00:02:53.245 ================= 00:02:53.245 00:02:53.245 apps: 00:02:53.245 dumpcap: explicitly disabled via build config 00:02:53.245 graph: explicitly disabled via build config 00:02:53.245 pdump: explicitly disabled via build config 00:02:53.245 proc-info: explicitly disabled via build config 00:02:53.245 test-acl: explicitly disabled via build config 00:02:53.245 test-bbdev: explicitly disabled via build config 00:02:53.245 test-cmdline: explicitly disabled via build config 00:02:53.245 test-compress-perf: explicitly disabled via build config 00:02:53.245 test-crypto-perf: explicitly disabled via build config 00:02:53.245 test-dma-perf: explicitly disabled via build config 00:02:53.245 test-eventdev: explicitly disabled via build config 00:02:53.245 test-fib: explicitly disabled via build config 00:02:53.245 test-flow-perf: explicitly disabled via build config 00:02:53.245 test-gpudev: explicitly disabled via build config 00:02:53.245 test-mldev: explicitly disabled via build config 00:02:53.245 test-pipeline: explicitly disabled via build config 00:02:53.245 test-pmd: explicitly disabled via build config 00:02:53.245 test-regex: explicitly disabled via build config 00:02:53.245 test-sad: explicitly disabled via build config 00:02:53.245 test-security-perf: explicitly disabled via build config 00:02:53.245 00:02:53.245 libs: 00:02:53.245 argparse: explicitly disabled via build config 00:02:53.245 metrics: explicitly disabled via build config 00:02:53.245 acl: explicitly disabled via build config 00:02:53.245 bbdev: explicitly disabled via build config 00:02:53.245 bitratestats: explicitly disabled via build config 00:02:53.245 bpf: explicitly disabled via build config 00:02:53.245 cfgfile: explicitly disabled via build config 00:02:53.245 distributor: explicitly disabled via build config 00:02:53.245 efd: explicitly disabled via build config 00:02:53.245 eventdev: explicitly disabled via build config 00:02:53.245 dispatcher: explicitly disabled via build config 00:02:53.245 gpudev: explicitly disabled via build config 00:02:53.245 gro: explicitly disabled via build config 00:02:53.245 gso: explicitly disabled via build config 00:02:53.245 ip_frag: explicitly disabled via build config 00:02:53.245 jobstats: explicitly disabled via build config 00:02:53.245 latencystats: explicitly disabled via build config 00:02:53.245 lpm: explicitly disabled via build config 00:02:53.245 member: explicitly disabled via build config 00:02:53.245 pcapng: explicitly disabled via build config 00:02:53.245 rawdev: explicitly disabled via build config 00:02:53.245 regexdev: explicitly disabled via build config 00:02:53.245 mldev: explicitly disabled via build config 00:02:53.245 rib: explicitly disabled via build config 00:02:53.245 sched: explicitly disabled via build config 00:02:53.245 stack: explicitly disabled via build config 00:02:53.245 ipsec: explicitly disabled via build config 00:02:53.245 pdcp: 
explicitly disabled via build config 00:02:53.245 fib: explicitly disabled via build config 00:02:53.245 port: explicitly disabled via build config 00:02:53.245 pdump: explicitly disabled via build config 00:02:53.245 table: explicitly disabled via build config 00:02:53.245 pipeline: explicitly disabled via build config 00:02:53.245 graph: explicitly disabled via build config 00:02:53.245 node: explicitly disabled via build config 00:02:53.245 00:02:53.245 drivers: 00:02:53.245 common/cpt: not in enabled drivers build config 00:02:53.245 common/dpaax: not in enabled drivers build config 00:02:53.245 common/iavf: not in enabled drivers build config 00:02:53.245 common/idpf: not in enabled drivers build config 00:02:53.245 common/ionic: not in enabled drivers build config 00:02:53.245 common/mvep: not in enabled drivers build config 00:02:53.245 common/octeontx: not in enabled drivers build config 00:02:53.245 bus/auxiliary: not in enabled drivers build config 00:02:53.245 bus/cdx: not in enabled drivers build config 00:02:53.245 bus/dpaa: not in enabled drivers build config 00:02:53.245 bus/fslmc: not in enabled drivers build config 00:02:53.245 bus/ifpga: not in enabled drivers build config 00:02:53.245 bus/platform: not in enabled drivers build config 00:02:53.245 bus/uacce: not in enabled drivers build config 00:02:53.245 bus/vmbus: not in enabled drivers build config 00:02:53.245 common/cnxk: not in enabled drivers build config 00:02:53.245 common/mlx5: not in enabled drivers build config 00:02:53.245 common/nfp: not in enabled drivers build config 00:02:53.245 common/nitrox: not in enabled drivers build config 00:02:53.245 common/qat: not in enabled drivers build config 00:02:53.245 common/sfc_efx: not in enabled drivers build config 00:02:53.245 mempool/bucket: not in enabled drivers build config 00:02:53.245 mempool/cnxk: not in enabled drivers build config 00:02:53.245 mempool/dpaa: not in enabled drivers build config 00:02:53.245 mempool/dpaa2: not in enabled drivers build config 00:02:53.245 mempool/octeontx: not in enabled drivers build config 00:02:53.245 mempool/stack: not in enabled drivers build config 00:02:53.245 dma/cnxk: not in enabled drivers build config 00:02:53.245 dma/dpaa: not in enabled drivers build config 00:02:53.245 dma/dpaa2: not in enabled drivers build config 00:02:53.245 dma/hisilicon: not in enabled drivers build config 00:02:53.245 dma/idxd: not in enabled drivers build config 00:02:53.245 dma/ioat: not in enabled drivers build config 00:02:53.245 dma/skeleton: not in enabled drivers build config 00:02:53.245 net/af_packet: not in enabled drivers build config 00:02:53.245 net/af_xdp: not in enabled drivers build config 00:02:53.245 net/ark: not in enabled drivers build config 00:02:53.245 net/atlantic: not in enabled drivers build config 00:02:53.245 net/avp: not in enabled drivers build config 00:02:53.245 net/axgbe: not in enabled drivers build config 00:02:53.245 net/bnx2x: not in enabled drivers build config 00:02:53.245 net/bnxt: not in enabled drivers build config 00:02:53.245 net/bonding: not in enabled drivers build config 00:02:53.245 net/cnxk: not in enabled drivers build config 00:02:53.245 net/cpfl: not in enabled drivers build config 00:02:53.245 net/cxgbe: not in enabled drivers build config 00:02:53.245 net/dpaa: not in enabled drivers build config 00:02:53.245 net/dpaa2: not in enabled drivers build config 00:02:53.245 net/e1000: not in enabled drivers build config 00:02:53.245 net/ena: not in enabled drivers build config 00:02:53.245 
net/enetc: not in enabled drivers build config 00:02:53.245 net/enetfec: not in enabled drivers build config 00:02:53.245 net/enic: not in enabled drivers build config 00:02:53.245 net/failsafe: not in enabled drivers build config 00:02:53.245 net/fm10k: not in enabled drivers build config 00:02:53.245 net/gve: not in enabled drivers build config 00:02:53.245 net/hinic: not in enabled drivers build config 00:02:53.245 net/hns3: not in enabled drivers build config 00:02:53.245 net/i40e: not in enabled drivers build config 00:02:53.245 net/iavf: not in enabled drivers build config 00:02:53.245 net/ice: not in enabled drivers build config 00:02:53.245 net/idpf: not in enabled drivers build config 00:02:53.245 net/igc: not in enabled drivers build config 00:02:53.245 net/ionic: not in enabled drivers build config 00:02:53.245 net/ipn3ke: not in enabled drivers build config 00:02:53.245 net/ixgbe: not in enabled drivers build config 00:02:53.245 net/mana: not in enabled drivers build config 00:02:53.245 net/memif: not in enabled drivers build config 00:02:53.245 net/mlx4: not in enabled drivers build config 00:02:53.245 net/mlx5: not in enabled drivers build config 00:02:53.245 net/mvneta: not in enabled drivers build config 00:02:53.245 net/mvpp2: not in enabled drivers build config 00:02:53.245 net/netvsc: not in enabled drivers build config 00:02:53.245 net/nfb: not in enabled drivers build config 00:02:53.245 net/nfp: not in enabled drivers build config 00:02:53.246 net/ngbe: not in enabled drivers build config 00:02:53.246 net/null: not in enabled drivers build config 00:02:53.246 net/octeontx: not in enabled drivers build config 00:02:53.246 net/octeon_ep: not in enabled drivers build config 00:02:53.246 net/pcap: not in enabled drivers build config 00:02:53.246 net/pfe: not in enabled drivers build config 00:02:53.246 net/qede: not in enabled drivers build config 00:02:53.246 net/ring: not in enabled drivers build config 00:02:53.246 net/sfc: not in enabled drivers build config 00:02:53.246 net/softnic: not in enabled drivers build config 00:02:53.246 net/tap: not in enabled drivers build config 00:02:53.246 net/thunderx: not in enabled drivers build config 00:02:53.246 net/txgbe: not in enabled drivers build config 00:02:53.246 net/vdev_netvsc: not in enabled drivers build config 00:02:53.246 net/vhost: not in enabled drivers build config 00:02:53.246 net/virtio: not in enabled drivers build config 00:02:53.246 net/vmxnet3: not in enabled drivers build config 00:02:53.246 raw/*: missing internal dependency, "rawdev" 00:02:53.246 crypto/armv8: not in enabled drivers build config 00:02:53.246 crypto/bcmfs: not in enabled drivers build config 00:02:53.246 crypto/caam_jr: not in enabled drivers build config 00:02:53.246 crypto/ccp: not in enabled drivers build config 00:02:53.246 crypto/cnxk: not in enabled drivers build config 00:02:53.246 crypto/dpaa_sec: not in enabled drivers build config 00:02:53.246 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.246 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.246 crypto/mlx5: not in enabled drivers build config 00:02:53.246 crypto/mvsam: not in enabled drivers build config 00:02:53.246 crypto/nitrox: not in enabled drivers build config 00:02:53.246 crypto/null: not in enabled drivers build config 00:02:53.246 crypto/octeontx: not in enabled drivers build config 00:02:53.246 crypto/openssl: not in enabled drivers build config 00:02:53.246 crypto/scheduler: not in enabled drivers build config 00:02:53.246 crypto/uadk: 
not in enabled drivers build config 00:02:53.246 crypto/virtio: not in enabled drivers build config 00:02:53.246 compress/isal: not in enabled drivers build config 00:02:53.246 compress/mlx5: not in enabled drivers build config 00:02:53.246 compress/nitrox: not in enabled drivers build config 00:02:53.246 compress/octeontx: not in enabled drivers build config 00:02:53.246 compress/zlib: not in enabled drivers build config 00:02:53.246 regex/*: missing internal dependency, "regexdev" 00:02:53.246 ml/*: missing internal dependency, "mldev" 00:02:53.246 vdpa/ifc: not in enabled drivers build config 00:02:53.246 vdpa/mlx5: not in enabled drivers build config 00:02:53.246 vdpa/nfp: not in enabled drivers build config 00:02:53.246 vdpa/sfc: not in enabled drivers build config 00:02:53.246 event/*: missing internal dependency, "eventdev" 00:02:53.246 baseband/*: missing internal dependency, "bbdev" 00:02:53.246 gpu/*: missing internal dependency, "gpudev" 00:02:53.246 00:02:53.246 00:02:53.246 Build targets in project: 85 00:02:53.246 00:02:53.246 DPDK 24.03.0 00:02:53.246 00:02:53.246 User defined options 00:02:53.246 buildtype : debug 00:02:53.246 default_library : shared 00:02:53.246 libdir : lib 00:02:53.246 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:53.246 b_sanitize : address 00:02:53.246 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:53.246 c_link_args : 00:02:53.246 cpu_instruction_set: native 00:02:53.246 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:53.246 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:53.246 enable_docs : false 00:02:53.246 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:53.246 enable_kmods : false 00:02:53.246 max_lcores : 128 00:02:53.246 tests : false 00:02:53.246 00:02:53.246 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:53.246 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:53.246 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:53.246 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:53.246 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:53.246 [4/268] Linking static target lib/librte_log.a 00:02:53.246 [5/268] Linking static target lib/librte_kvargs.a 00:02:53.246 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.505 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.505 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.763 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.763 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.763 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.763 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.763 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.763 [14/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.763 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:53.763 [16/268] Linking static target lib/librte_telemetry.a 00:02:53.763 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.763 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:54.022 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.022 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.280 [21/268] Linking target lib/librte_log.so.24.1 00:02:54.280 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:54.280 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.280 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.280 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.280 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:54.280 [27/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:54.280 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.539 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:54.539 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.539 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.539 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.539 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.539 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.799 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:54.799 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.799 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.799 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.799 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:55.058 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:55.058 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:55.058 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.058 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.058 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.058 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:55.058 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.317 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.317 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:55.317 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:55.317 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.575 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:55.575 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.575 [53/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.575 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:55.575 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.834 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.834 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.834 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.834 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.834 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.834 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:55.834 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:56.161 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:56.161 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:56.161 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:56.161 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:56.447 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:56.447 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:56.447 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:56.447 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:56.447 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:56.447 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.706 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:56.706 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:56.706 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.706 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.706 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:56.706 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:56.966 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:56.966 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:56.966 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:56.966 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.966 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.966 [84/268] Linking static target lib/librte_ring.a 00:02:57.225 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:57.225 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:57.225 [87/268] Linking static target lib/librte_eal.a 00:02:57.225 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:57.485 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:57.485 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:57.745 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:57.745 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.745 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.745 [94/268] Compiling C object 
lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:57.745 [95/268] Linking static target lib/librte_rcu.a 00:02:57.745 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:57.745 [97/268] Linking static target lib/librte_mempool.a 00:02:57.745 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:58.005 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:58.005 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:58.005 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:58.264 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.264 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:58.264 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:58.264 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:58.523 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:58.523 [107/268] Linking static target lib/librte_net.a 00:02:58.523 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:58.523 [109/268] Linking static target lib/librte_meter.a 00:02:58.523 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:58.782 [111/268] Linking static target lib/librte_mbuf.a 00:02:58.782 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.782 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.782 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:59.042 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.042 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.042 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:59.042 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.301 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.561 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:59.820 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.820 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.820 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.820 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.820 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.820 [126/268] Linking static target lib/librte_pci.a 00:02:59.820 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.079 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:00.079 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:00.079 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:00.079 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:00.079 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:00.339 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:00.339 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:00.339 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:00.339 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:00.339 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:00.339 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:00.339 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:00.339 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:00.339 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:00.339 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:00.339 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:00.598 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:00.598 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:00.598 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:00.598 [147/268] Linking static target lib/librte_cmdline.a 00:03:00.598 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.857 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.857 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:00.857 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.857 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.857 [153/268] Linking static target lib/librte_timer.a 00:03:01.116 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:01.116 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:01.376 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:01.376 [157/268] Linking static target lib/librte_compressdev.a 00:03:01.376 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:01.376 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.635 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.635 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.635 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.894 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.894 [164/268] Linking static target lib/librte_dmadev.a 00:03:01.894 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.894 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.894 [167/268] Linking static target lib/librte_hash.a 00:03:01.894 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.894 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.152 [170/268] Linking static target lib/librte_ethdev.a 00:03:02.152 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.152 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.152 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.152 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.411 [175/268] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.411 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.411 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:02.411 [178/268] Linking static target lib/librte_cryptodev.a 00:03:02.669 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.669 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.669 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.669 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.669 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.669 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.929 [185/268] Linking static target lib/librte_power.a 00:03:02.929 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.188 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.188 [188/268] Linking static target lib/librte_reorder.a 00:03:03.188 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.188 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.188 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.188 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.188 [193/268] Linking static target lib/librte_security.a 00:03:03.756 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.756 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.027 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.027 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.027 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.027 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.027 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.303 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.303 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.303 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.563 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.563 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.563 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.563 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.821 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.821 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.821 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.821 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.821 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.821 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.821 [214/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.821 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.080 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.080 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.080 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.080 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.080 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.080 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:05.080 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.080 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.080 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.080 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.080 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.648 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.214 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.504 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.504 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.504 [231/268] Linking static target lib/librte_vhost.a 00:03:09.504 [232/268] Linking target lib/librte_eal.so.24.1 00:03:09.504 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:09.504 [234/268] Linking target lib/librte_timer.so.24.1 00:03:09.763 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:09.763 [236/268] Linking target lib/librte_pci.so.24.1 00:03:09.763 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:09.763 [238/268] Linking target lib/librte_ring.so.24.1 00:03:09.763 [239/268] Linking target lib/librte_meter.so.24.1 00:03:09.763 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:09.763 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:09.763 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:09.763 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:09.763 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:09.763 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:09.763 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:09.763 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:10.022 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:10.022 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:10.022 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:10.022 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:10.022 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.281 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:10.281 [254/268] Linking target lib/librte_net.so.24.1 00:03:10.281 [255/268] Linking target 
lib/librte_compressdev.so.24.1 00:03:10.281 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:10.281 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.281 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.281 [259/268] Linking target lib/librte_hash.so.24.1 00:03:10.281 [260/268] Linking target lib/librte_security.so.24.1 00:03:10.281 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:10.540 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:11.475 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.475 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:11.475 [265/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.734 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:11.734 [267/268] Linking target lib/librte_power.so.24.1 00:03:11.734 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:11.734 INFO: autodetecting backend as ninja 00:03:11.734 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.110 CC lib/log/log_flags.o 00:03:13.110 CC lib/log/log.o 00:03:13.110 CC lib/log/log_deprecated.o 00:03:13.110 CC lib/ut/ut.o 00:03:13.110 CC lib/ut_mock/mock.o 00:03:13.110 LIB libspdk_log.a 00:03:13.110 LIB libspdk_ut.a 00:03:13.110 SO libspdk_log.so.7.0 00:03:13.110 SO libspdk_ut.so.2.0 00:03:13.110 LIB libspdk_ut_mock.a 00:03:13.110 SO libspdk_ut_mock.so.6.0 00:03:13.110 SYMLINK libspdk_log.so 00:03:13.110 SYMLINK libspdk_ut.so 00:03:13.369 SYMLINK libspdk_ut_mock.so 00:03:13.369 CXX lib/trace_parser/trace.o 00:03:13.369 CC lib/dma/dma.o 00:03:13.369 CC lib/ioat/ioat.o 00:03:13.628 CC lib/util/base64.o 00:03:13.628 CC lib/util/bit_array.o 00:03:13.628 CC lib/util/cpuset.o 00:03:13.628 CC lib/util/crc16.o 00:03:13.628 CC lib/util/crc32.o 00:03:13.628 CC lib/util/crc32c.o 00:03:13.628 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.628 CC lib/vfio_user/host/vfio_user.o 00:03:13.628 CC lib/util/crc32_ieee.o 00:03:13.628 LIB libspdk_dma.a 00:03:13.628 SO libspdk_dma.so.4.0 00:03:13.628 CC lib/util/crc64.o 00:03:13.628 CC lib/util/dif.o 00:03:13.628 CC lib/util/fd.o 00:03:13.628 CC lib/util/fd_group.o 00:03:13.628 SYMLINK libspdk_dma.so 00:03:13.628 CC lib/util/file.o 00:03:13.888 CC lib/util/hexlify.o 00:03:13.888 LIB libspdk_ioat.a 00:03:13.888 CC lib/util/iov.o 00:03:13.888 SO libspdk_ioat.so.7.0 00:03:13.888 CC lib/util/math.o 00:03:13.888 CC lib/util/net.o 00:03:13.888 LIB libspdk_vfio_user.a 00:03:13.888 SYMLINK libspdk_ioat.so 00:03:13.888 CC lib/util/pipe.o 00:03:13.888 SO libspdk_vfio_user.so.5.0 00:03:13.888 CC lib/util/strerror_tls.o 00:03:13.888 CC lib/util/string.o 00:03:13.888 SYMLINK libspdk_vfio_user.so 00:03:13.888 CC lib/util/uuid.o 00:03:13.888 CC lib/util/xor.o 00:03:13.888 CC lib/util/zipf.o 00:03:14.455 LIB libspdk_util.a 00:03:14.455 LIB libspdk_trace_parser.a 00:03:14.455 SO libspdk_util.so.10.0 00:03:14.455 SO libspdk_trace_parser.so.5.0 00:03:14.455 SYMLINK libspdk_trace_parser.so 00:03:14.713 SYMLINK libspdk_util.so 00:03:14.713 CC lib/rdma_provider/common.o 00:03:14.713 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:14.713 CC lib/rdma_utils/rdma_utils.o 00:03:14.713 CC lib/json/json_parse.o 00:03:14.713 CC lib/conf/conf.o 00:03:14.713 CC lib/json/json_util.o 00:03:14.713 CC 
lib/json/json_write.o 00:03:14.713 CC lib/idxd/idxd.o 00:03:14.713 CC lib/vmd/vmd.o 00:03:14.713 CC lib/env_dpdk/env.o 00:03:14.971 CC lib/env_dpdk/memory.o 00:03:14.971 LIB libspdk_rdma_provider.a 00:03:14.972 SO libspdk_rdma_provider.so.6.0 00:03:14.972 LIB libspdk_conf.a 00:03:14.972 SO libspdk_conf.so.6.0 00:03:14.972 CC lib/idxd/idxd_user.o 00:03:14.972 CC lib/idxd/idxd_kernel.o 00:03:14.972 LIB libspdk_rdma_utils.a 00:03:14.972 LIB libspdk_json.a 00:03:14.972 SYMLINK libspdk_rdma_provider.so 00:03:14.972 CC lib/vmd/led.o 00:03:14.972 SYMLINK libspdk_conf.so 00:03:14.972 CC lib/env_dpdk/pci.o 00:03:14.972 SO libspdk_rdma_utils.so.1.0 00:03:15.231 SO libspdk_json.so.6.0 00:03:15.231 SYMLINK libspdk_rdma_utils.so 00:03:15.231 CC lib/env_dpdk/init.o 00:03:15.231 SYMLINK libspdk_json.so 00:03:15.231 CC lib/env_dpdk/threads.o 00:03:15.231 CC lib/env_dpdk/pci_ioat.o 00:03:15.231 CC lib/env_dpdk/pci_virtio.o 00:03:15.231 CC lib/env_dpdk/pci_vmd.o 00:03:15.231 CC lib/env_dpdk/pci_idxd.o 00:03:15.231 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.490 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.490 CC lib/env_dpdk/pci_event.o 00:03:15.490 LIB libspdk_idxd.a 00:03:15.490 CC lib/env_dpdk/sigbus_handler.o 00:03:15.490 SO libspdk_idxd.so.12.0 00:03:15.490 CC lib/env_dpdk/pci_dpdk.o 00:03:15.490 LIB libspdk_vmd.a 00:03:15.490 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.490 SYMLINK libspdk_idxd.so 00:03:15.490 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.490 SO libspdk_vmd.so.6.0 00:03:15.490 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.490 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.748 SYMLINK libspdk_vmd.so 00:03:15.748 LIB libspdk_jsonrpc.a 00:03:15.748 SO libspdk_jsonrpc.so.6.0 00:03:16.007 SYMLINK libspdk_jsonrpc.so 00:03:16.266 CC lib/rpc/rpc.o 00:03:16.266 LIB libspdk_env_dpdk.a 00:03:16.525 SO libspdk_env_dpdk.so.15.0 00:03:16.525 LIB libspdk_rpc.a 00:03:16.525 SO libspdk_rpc.so.6.0 00:03:16.525 SYMLINK libspdk_rpc.so 00:03:16.783 SYMLINK libspdk_env_dpdk.so 00:03:17.042 CC lib/keyring/keyring.o 00:03:17.042 CC lib/keyring/keyring_rpc.o 00:03:17.042 CC lib/notify/notify.o 00:03:17.042 CC lib/notify/notify_rpc.o 00:03:17.042 CC lib/trace/trace.o 00:03:17.042 CC lib/trace/trace_flags.o 00:03:17.042 CC lib/trace/trace_rpc.o 00:03:17.042 LIB libspdk_notify.a 00:03:17.042 SO libspdk_notify.so.6.0 00:03:17.301 LIB libspdk_keyring.a 00:03:17.301 SYMLINK libspdk_notify.so 00:03:17.301 LIB libspdk_trace.a 00:03:17.301 SO libspdk_keyring.so.1.0 00:03:17.301 SO libspdk_trace.so.10.0 00:03:17.301 SYMLINK libspdk_keyring.so 00:03:17.301 SYMLINK libspdk_trace.so 00:03:17.868 CC lib/thread/thread.o 00:03:17.868 CC lib/thread/iobuf.o 00:03:17.868 CC lib/sock/sock.o 00:03:17.868 CC lib/sock/sock_rpc.o 00:03:18.127 LIB libspdk_sock.a 00:03:18.127 SO libspdk_sock.so.10.0 00:03:18.386 SYMLINK libspdk_sock.so 00:03:18.645 CC lib/nvme/nvme_ctrlr.o 00:03:18.645 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.645 CC lib/nvme/nvme_fabric.o 00:03:18.645 CC lib/nvme/nvme_ns_cmd.o 00:03:18.645 CC lib/nvme/nvme_pcie_common.o 00:03:18.645 CC lib/nvme/nvme_ns.o 00:03:18.645 CC lib/nvme/nvme_pcie.o 00:03:18.645 CC lib/nvme/nvme_qpair.o 00:03:18.645 CC lib/nvme/nvme.o 00:03:19.212 CC lib/nvme/nvme_quirks.o 00:03:19.212 CC lib/nvme/nvme_transport.o 00:03:19.212 LIB libspdk_thread.a 00:03:19.212 CC lib/nvme/nvme_discovery.o 00:03:19.471 SO libspdk_thread.so.10.1 00:03:19.471 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.471 SYMLINK libspdk_thread.so 00:03:19.471 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.471 CC lib/nvme/nvme_tcp.o 00:03:19.471 CC 
lib/nvme/nvme_opal.o 00:03:19.471 CC lib/nvme/nvme_io_msg.o 00:03:19.729 CC lib/nvme/nvme_poll_group.o 00:03:19.729 CC lib/nvme/nvme_zns.o 00:03:19.729 CC lib/nvme/nvme_stubs.o 00:03:19.729 CC lib/nvme/nvme_auth.o 00:03:20.018 CC lib/nvme/nvme_cuse.o 00:03:20.018 CC lib/nvme/nvme_rdma.o 00:03:20.305 CC lib/accel/accel.o 00:03:20.305 CC lib/blob/blobstore.o 00:03:20.305 CC lib/blob/request.o 00:03:20.305 CC lib/init/json_config.o 00:03:20.305 CC lib/virtio/virtio.o 00:03:20.564 CC lib/init/subsystem.o 00:03:20.564 CC lib/blob/zeroes.o 00:03:20.822 CC lib/virtio/virtio_vhost_user.o 00:03:20.822 CC lib/blob/blob_bs_dev.o 00:03:20.822 CC lib/init/subsystem_rpc.o 00:03:20.822 CC lib/virtio/virtio_vfio_user.o 00:03:20.822 CC lib/virtio/virtio_pci.o 00:03:21.081 CC lib/init/rpc.o 00:03:21.081 CC lib/accel/accel_rpc.o 00:03:21.081 CC lib/accel/accel_sw.o 00:03:21.081 LIB libspdk_init.a 00:03:21.081 SO libspdk_init.so.5.0 00:03:21.340 SYMLINK libspdk_init.so 00:03:21.340 LIB libspdk_virtio.a 00:03:21.340 SO libspdk_virtio.so.7.0 00:03:21.340 LIB libspdk_accel.a 00:03:21.340 SYMLINK libspdk_virtio.so 00:03:21.599 SO libspdk_accel.so.16.0 00:03:21.599 LIB libspdk_nvme.a 00:03:21.599 SYMLINK libspdk_accel.so 00:03:21.599 CC lib/event/app.o 00:03:21.599 CC lib/event/log_rpc.o 00:03:21.599 CC lib/event/reactor.o 00:03:21.599 CC lib/event/app_rpc.o 00:03:21.599 CC lib/event/scheduler_static.o 00:03:21.858 SO libspdk_nvme.so.13.1 00:03:21.858 CC lib/bdev/bdev.o 00:03:21.858 CC lib/bdev/bdev_rpc.o 00:03:21.858 CC lib/bdev/scsi_nvme.o 00:03:21.858 CC lib/bdev/bdev_zone.o 00:03:21.858 CC lib/bdev/part.o 00:03:22.116 SYMLINK libspdk_nvme.so 00:03:22.116 LIB libspdk_event.a 00:03:22.116 SO libspdk_event.so.14.0 00:03:22.375 SYMLINK libspdk_event.so 00:03:23.752 LIB libspdk_blob.a 00:03:24.011 SO libspdk_blob.so.11.0 00:03:24.011 SYMLINK libspdk_blob.so 00:03:24.579 CC lib/blobfs/blobfs.o 00:03:24.579 CC lib/blobfs/tree.o 00:03:24.579 CC lib/lvol/lvol.o 00:03:24.845 LIB libspdk_bdev.a 00:03:24.846 SO libspdk_bdev.so.16.0 00:03:25.105 SYMLINK libspdk_bdev.so 00:03:25.364 CC lib/nvmf/ctrlr.o 00:03:25.364 CC lib/nvmf/ctrlr_discovery.o 00:03:25.364 CC lib/scsi/dev.o 00:03:25.364 CC lib/scsi/lun.o 00:03:25.364 CC lib/ublk/ublk.o 00:03:25.364 CC lib/nvmf/ctrlr_bdev.o 00:03:25.364 CC lib/nbd/nbd.o 00:03:25.364 CC lib/ftl/ftl_core.o 00:03:25.364 LIB libspdk_blobfs.a 00:03:25.364 SO libspdk_blobfs.so.10.0 00:03:25.364 LIB libspdk_lvol.a 00:03:25.364 SYMLINK libspdk_blobfs.so 00:03:25.364 CC lib/ftl/ftl_init.o 00:03:25.364 SO libspdk_lvol.so.10.0 00:03:25.364 CC lib/ublk/ublk_rpc.o 00:03:25.622 SYMLINK libspdk_lvol.so 00:03:25.622 CC lib/nvmf/subsystem.o 00:03:25.622 CC lib/scsi/port.o 00:03:25.622 CC lib/scsi/scsi.o 00:03:25.622 CC lib/scsi/scsi_bdev.o 00:03:25.622 CC lib/ftl/ftl_layout.o 00:03:25.622 CC lib/nvmf/nvmf.o 00:03:25.622 CC lib/nbd/nbd_rpc.o 00:03:25.880 CC lib/nvmf/nvmf_rpc.o 00:03:25.880 CC lib/scsi/scsi_pr.o 00:03:25.880 LIB libspdk_ublk.a 00:03:25.880 LIB libspdk_nbd.a 00:03:25.880 SO libspdk_ublk.so.3.0 00:03:25.880 SO libspdk_nbd.so.7.0 00:03:25.880 CC lib/ftl/ftl_debug.o 00:03:25.880 SYMLINK libspdk_nbd.so 00:03:25.880 SYMLINK libspdk_ublk.so 00:03:25.880 CC lib/ftl/ftl_io.o 00:03:25.880 CC lib/ftl/ftl_sb.o 00:03:25.880 CC lib/ftl/ftl_l2p.o 00:03:26.139 CC lib/ftl/ftl_l2p_flat.o 00:03:26.139 CC lib/scsi/scsi_rpc.o 00:03:26.139 CC lib/ftl/ftl_nv_cache.o 00:03:26.139 CC lib/nvmf/transport.o 00:03:26.139 CC lib/nvmf/tcp.o 00:03:26.139 CC lib/nvmf/stubs.o 00:03:26.397 CC lib/scsi/task.o 00:03:26.397 CC 
lib/ftl/ftl_band.o 00:03:26.397 LIB libspdk_scsi.a 00:03:26.656 CC lib/nvmf/mdns_server.o 00:03:26.656 SO libspdk_scsi.so.9.0 00:03:26.656 CC lib/nvmf/rdma.o 00:03:26.656 CC lib/nvmf/auth.o 00:03:26.656 SYMLINK libspdk_scsi.so 00:03:26.656 CC lib/ftl/ftl_band_ops.o 00:03:26.915 CC lib/iscsi/conn.o 00:03:26.915 CC lib/iscsi/init_grp.o 00:03:26.915 CC lib/vhost/vhost.o 00:03:26.915 CC lib/iscsi/iscsi.o 00:03:27.174 CC lib/iscsi/md5.o 00:03:27.174 CC lib/vhost/vhost_rpc.o 00:03:27.174 CC lib/ftl/ftl_writer.o 00:03:27.174 CC lib/iscsi/param.o 00:03:27.174 CC lib/iscsi/portal_grp.o 00:03:27.433 CC lib/ftl/ftl_rq.o 00:03:27.433 CC lib/iscsi/tgt_node.o 00:03:27.433 CC lib/vhost/vhost_scsi.o 00:03:27.434 CC lib/vhost/vhost_blk.o 00:03:27.434 CC lib/vhost/rte_vhost_user.o 00:03:27.693 CC lib/ftl/ftl_reloc.o 00:03:27.693 CC lib/iscsi/iscsi_subsystem.o 00:03:27.693 CC lib/iscsi/iscsi_rpc.o 00:03:27.951 CC lib/iscsi/task.o 00:03:27.951 CC lib/ftl/ftl_l2p_cache.o 00:03:27.951 CC lib/ftl/ftl_p2l.o 00:03:27.951 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.210 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.210 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.210 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.468 LIB libspdk_iscsi.a 00:03:28.468 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.726 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.726 CC lib/ftl/utils/ftl_conf.o 00:03:28.726 SO libspdk_iscsi.so.8.0 00:03:28.726 LIB libspdk_vhost.a 00:03:28.726 CC lib/ftl/utils/ftl_md.o 00:03:28.726 CC lib/ftl/utils/ftl_mempool.o 00:03:28.726 CC lib/ftl/utils/ftl_bitmap.o 00:03:28.726 SO libspdk_vhost.so.8.0 00:03:28.726 CC lib/ftl/utils/ftl_property.o 00:03:28.726 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.726 SYMLINK libspdk_iscsi.so 00:03:28.726 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.726 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.984 SYMLINK libspdk_vhost.so 00:03:28.984 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.984 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.984 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.984 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.984 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.984 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.984 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.984 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.984 CC lib/ftl/base/ftl_base_dev.o 00:03:29.243 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.243 CC lib/ftl/ftl_trace.o 00:03:29.243 LIB libspdk_nvmf.a 00:03:29.243 SO libspdk_nvmf.so.19.0 00:03:29.501 LIB libspdk_ftl.a 00:03:29.501 SYMLINK libspdk_nvmf.so 00:03:29.760 SO libspdk_ftl.so.9.0 00:03:30.018 SYMLINK libspdk_ftl.so 00:03:30.584 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.585 CC module/accel/error/accel_error.o 00:03:30.585 CC module/keyring/file/keyring.o 00:03:30.585 CC module/accel/iaa/accel_iaa.o 00:03:30.585 CC module/accel/dsa/accel_dsa.o 00:03:30.585 CC module/accel/ioat/accel_ioat.o 00:03:30.585 CC module/keyring/linux/keyring.o 00:03:30.585 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.585 CC module/blob/bdev/blob_bdev.o 00:03:30.585 CC module/sock/posix/posix.o 00:03:30.585 LIB libspdk_env_dpdk_rpc.a 00:03:30.585 SO libspdk_env_dpdk_rpc.so.6.0 00:03:30.585 CC module/keyring/linux/keyring_rpc.o 00:03:30.585 CC module/keyring/file/keyring_rpc.o 
00:03:30.585 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.585 CC module/accel/error/accel_error_rpc.o 00:03:30.844 LIB libspdk_scheduler_dynamic.a 00:03:30.844 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.844 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.844 SO libspdk_scheduler_dynamic.so.4.0 00:03:30.844 LIB libspdk_keyring_file.a 00:03:30.844 CC module/accel/dsa/accel_dsa_rpc.o 00:03:30.844 LIB libspdk_blob_bdev.a 00:03:30.844 LIB libspdk_keyring_linux.a 00:03:30.844 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.844 LIB libspdk_accel_error.a 00:03:30.844 SO libspdk_blob_bdev.so.11.0 00:03:30.844 SO libspdk_keyring_file.so.1.0 00:03:30.844 LIB libspdk_accel_iaa.a 00:03:30.844 SO libspdk_keyring_linux.so.1.0 00:03:30.844 LIB libspdk_accel_ioat.a 00:03:30.844 SO libspdk_accel_error.so.2.0 00:03:30.844 SO libspdk_accel_iaa.so.3.0 00:03:30.844 SO libspdk_accel_ioat.so.6.0 00:03:30.844 SYMLINK libspdk_blob_bdev.so 00:03:30.844 SYMLINK libspdk_keyring_file.so 00:03:30.844 SYMLINK libspdk_keyring_linux.so 00:03:30.844 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.844 SYMLINK libspdk_accel_error.so 00:03:30.844 LIB libspdk_accel_dsa.a 00:03:30.844 SYMLINK libspdk_accel_ioat.so 00:03:30.844 SYMLINK libspdk_accel_iaa.so 00:03:31.103 SO libspdk_accel_dsa.so.5.0 00:03:31.103 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.103 SYMLINK libspdk_accel_dsa.so 00:03:31.103 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.103 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.103 LIB libspdk_scheduler_gscheduler.a 00:03:31.103 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.103 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.103 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.103 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.103 CC module/bdev/delay/vbdev_delay.o 00:03:31.103 CC module/bdev/error/vbdev_error.o 00:03:31.103 CC module/bdev/malloc/bdev_malloc.o 00:03:31.103 CC module/bdev/gpt/gpt.o 00:03:31.103 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.103 CC module/bdev/null/bdev_null.o 00:03:31.362 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.362 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.362 LIB libspdk_sock_posix.a 00:03:31.362 SO libspdk_sock_posix.so.6.0 00:03:31.362 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.362 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.362 LIB libspdk_blobfs_bdev.a 00:03:31.362 SYMLINK libspdk_sock_posix.so 00:03:31.362 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.362 SO libspdk_blobfs_bdev.so.6.0 00:03:31.362 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.621 CC module/bdev/null/bdev_null_rpc.o 00:03:31.621 SYMLINK libspdk_blobfs_bdev.so 00:03:31.621 LIB libspdk_bdev_error.a 00:03:31.621 LIB libspdk_bdev_malloc.a 00:03:31.621 SO libspdk_bdev_error.so.6.0 00:03:31.621 SO libspdk_bdev_malloc.so.6.0 00:03:31.621 LIB libspdk_bdev_delay.a 00:03:31.621 CC module/bdev/nvme/bdev_nvme.o 00:03:31.621 LIB libspdk_bdev_null.a 00:03:31.621 LIB libspdk_bdev_gpt.a 00:03:31.621 SO libspdk_bdev_delay.so.6.0 00:03:31.621 SYMLINK libspdk_bdev_error.so 00:03:31.621 LIB libspdk_bdev_lvol.a 00:03:31.621 SO libspdk_bdev_gpt.so.6.0 00:03:31.621 SO libspdk_bdev_null.so.6.0 00:03:31.621 SYMLINK libspdk_bdev_malloc.so 00:03:31.621 SYMLINK libspdk_bdev_delay.so 00:03:31.621 SO libspdk_bdev_lvol.so.6.0 00:03:31.880 CC module/bdev/raid/bdev_raid.o 00:03:31.880 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.880 SYMLINK libspdk_bdev_null.so 00:03:31.880 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.880 SYMLINK libspdk_bdev_gpt.so 00:03:31.880 SYMLINK libspdk_bdev_lvol.so 
00:03:31.880 CC module/bdev/split/vbdev_split.o 00:03:31.880 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.880 CC module/bdev/xnvme/bdev_xnvme.o 00:03:31.880 CC module/bdev/aio/bdev_aio.o 00:03:31.880 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.880 CC module/bdev/ftl/bdev_ftl.o 00:03:31.880 CC module/bdev/iscsi/bdev_iscsi.o 00:03:32.137 LIB libspdk_bdev_passthru.a 00:03:32.137 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.137 SO libspdk_bdev_passthru.so.6.0 00:03:32.137 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.137 SYMLINK libspdk_bdev_passthru.so 00:03:32.137 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:32.137 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.137 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.137 LIB libspdk_bdev_split.a 00:03:32.137 SO libspdk_bdev_split.so.6.0 00:03:32.137 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.395 LIB libspdk_bdev_aio.a 00:03:32.395 SO libspdk_bdev_aio.so.6.0 00:03:32.395 LIB libspdk_bdev_xnvme.a 00:03:32.395 CC module/bdev/raid/raid0.o 00:03:32.395 SYMLINK libspdk_bdev_split.so 00:03:32.395 CC module/bdev/raid/raid1.o 00:03:32.395 SO libspdk_bdev_xnvme.so.3.0 00:03:32.395 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:32.395 LIB libspdk_bdev_zone_block.a 00:03:32.395 SYMLINK libspdk_bdev_aio.so 00:03:32.395 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.395 SO libspdk_bdev_zone_block.so.6.0 00:03:32.395 SYMLINK libspdk_bdev_xnvme.so 00:03:32.395 CC module/bdev/raid/concat.o 00:03:32.395 CC module/bdev/nvme/nvme_rpc.o 00:03:32.395 SYMLINK libspdk_bdev_zone_block.so 00:03:32.395 LIB libspdk_bdev_ftl.a 00:03:32.395 LIB libspdk_bdev_iscsi.a 00:03:32.395 SO libspdk_bdev_ftl.so.6.0 00:03:32.653 SO libspdk_bdev_iscsi.so.6.0 00:03:32.653 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.653 SYMLINK libspdk_bdev_ftl.so 00:03:32.653 CC module/bdev/nvme/vbdev_opal.o 00:03:32.653 SYMLINK libspdk_bdev_iscsi.so 00:03:32.653 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.653 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.653 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.653 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.653 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.910 LIB libspdk_bdev_raid.a 00:03:32.910 SO libspdk_bdev_raid.so.6.0 00:03:33.167 SYMLINK libspdk_bdev_raid.so 00:03:33.167 LIB libspdk_bdev_virtio.a 00:03:33.167 SO libspdk_bdev_virtio.so.6.0 00:03:33.425 SYMLINK libspdk_bdev_virtio.so 00:03:34.358 LIB libspdk_bdev_nvme.a 00:03:34.358 SO libspdk_bdev_nvme.so.7.0 00:03:34.358 SYMLINK libspdk_bdev_nvme.so 00:03:34.924 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.924 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.924 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.924 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.924 CC module/event/subsystems/vmd/vmd.o 00:03:34.924 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.924 CC module/event/subsystems/sock/sock.o 00:03:34.924 CC module/event/subsystems/keyring/keyring.o 00:03:35.182 LIB libspdk_event_vhost_blk.a 00:03:35.182 LIB libspdk_event_sock.a 00:03:35.182 LIB libspdk_event_keyring.a 00:03:35.182 LIB libspdk_event_scheduler.a 00:03:35.182 LIB libspdk_event_vmd.a 00:03:35.182 LIB libspdk_event_iobuf.a 00:03:35.182 SO libspdk_event_vhost_blk.so.3.0 00:03:35.182 SO libspdk_event_scheduler.so.4.0 00:03:35.182 SO libspdk_event_keyring.so.1.0 00:03:35.182 SO libspdk_event_sock.so.5.0 00:03:35.182 SO libspdk_event_vmd.so.6.0 00:03:35.182 SO libspdk_event_iobuf.so.3.0 00:03:35.182 SYMLINK libspdk_event_vhost_blk.so 00:03:35.182 SYMLINK 
libspdk_event_scheduler.so 00:03:35.182 SYMLINK libspdk_event_keyring.so 00:03:35.182 SYMLINK libspdk_event_sock.so 00:03:35.182 SYMLINK libspdk_event_vmd.so 00:03:35.182 SYMLINK libspdk_event_iobuf.so 00:03:35.759 CC module/event/subsystems/accel/accel.o 00:03:35.759 LIB libspdk_event_accel.a 00:03:35.759 SO libspdk_event_accel.so.6.0 00:03:36.016 SYMLINK libspdk_event_accel.so 00:03:36.273 CC module/event/subsystems/bdev/bdev.o 00:03:36.530 LIB libspdk_event_bdev.a 00:03:36.530 SO libspdk_event_bdev.so.6.0 00:03:36.530 SYMLINK libspdk_event_bdev.so 00:03:37.095 CC module/event/subsystems/nbd/nbd.o 00:03:37.095 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.095 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.095 CC module/event/subsystems/scsi/scsi.o 00:03:37.095 CC module/event/subsystems/ublk/ublk.o 00:03:37.095 LIB libspdk_event_nbd.a 00:03:37.095 SO libspdk_event_nbd.so.6.0 00:03:37.095 LIB libspdk_event_ublk.a 00:03:37.095 LIB libspdk_event_scsi.a 00:03:37.095 SYMLINK libspdk_event_nbd.so 00:03:37.095 LIB libspdk_event_nvmf.a 00:03:37.095 SO libspdk_event_ublk.so.3.0 00:03:37.095 SO libspdk_event_scsi.so.6.0 00:03:37.353 SO libspdk_event_nvmf.so.6.0 00:03:37.353 SYMLINK libspdk_event_ublk.so 00:03:37.353 SYMLINK libspdk_event_scsi.so 00:03:37.353 SYMLINK libspdk_event_nvmf.so 00:03:37.612 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.612 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.870 LIB libspdk_event_vhost_scsi.a 00:03:37.870 LIB libspdk_event_iscsi.a 00:03:37.870 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.870 SO libspdk_event_iscsi.so.6.0 00:03:37.870 SYMLINK libspdk_event_iscsi.so 00:03:37.870 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.129 SO libspdk.so.6.0 00:03:38.129 SYMLINK libspdk.so 00:03:38.387 CXX app/trace/trace.o 00:03:38.387 CC app/trace_record/trace_record.o 00:03:38.387 CC test/rpc_client/rpc_client_test.o 00:03:38.387 TEST_HEADER include/spdk/accel.h 00:03:38.387 TEST_HEADER include/spdk/accel_module.h 00:03:38.387 TEST_HEADER include/spdk/assert.h 00:03:38.387 TEST_HEADER include/spdk/barrier.h 00:03:38.387 TEST_HEADER include/spdk/base64.h 00:03:38.387 TEST_HEADER include/spdk/bdev.h 00:03:38.387 TEST_HEADER include/spdk/bdev_module.h 00:03:38.387 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.387 TEST_HEADER include/spdk/bit_array.h 00:03:38.387 CC app/nvmf_tgt/nvmf_main.o 00:03:38.387 TEST_HEADER include/spdk/bit_pool.h 00:03:38.387 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.387 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.387 TEST_HEADER include/spdk/blobfs.h 00:03:38.387 TEST_HEADER include/spdk/blob.h 00:03:38.387 TEST_HEADER include/spdk/conf.h 00:03:38.387 TEST_HEADER include/spdk/config.h 00:03:38.387 TEST_HEADER include/spdk/cpuset.h 00:03:38.387 TEST_HEADER include/spdk/crc16.h 00:03:38.387 TEST_HEADER include/spdk/crc32.h 00:03:38.387 TEST_HEADER include/spdk/crc64.h 00:03:38.387 TEST_HEADER include/spdk/dif.h 00:03:38.387 TEST_HEADER include/spdk/dma.h 00:03:38.387 TEST_HEADER include/spdk/endian.h 00:03:38.387 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.387 TEST_HEADER include/spdk/env.h 00:03:38.387 TEST_HEADER include/spdk/event.h 00:03:38.387 TEST_HEADER include/spdk/fd_group.h 00:03:38.387 TEST_HEADER include/spdk/fd.h 00:03:38.387 TEST_HEADER include/spdk/file.h 00:03:38.387 TEST_HEADER include/spdk/ftl.h 00:03:38.387 CC examples/util/zipf/zipf.o 00:03:38.387 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.387 TEST_HEADER include/spdk/hexlify.h 00:03:38.387 TEST_HEADER include/spdk/histogram_data.h 00:03:38.388 
TEST_HEADER include/spdk/idxd.h 00:03:38.646 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.646 TEST_HEADER include/spdk/init.h 00:03:38.646 TEST_HEADER include/spdk/ioat.h 00:03:38.646 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.646 CC test/thread/poller_perf/poller_perf.o 00:03:38.646 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.646 TEST_HEADER include/spdk/json.h 00:03:38.646 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.646 TEST_HEADER include/spdk/keyring.h 00:03:38.646 TEST_HEADER include/spdk/keyring_module.h 00:03:38.646 TEST_HEADER include/spdk/likely.h 00:03:38.646 CC test/app/bdev_svc/bdev_svc.o 00:03:38.646 TEST_HEADER include/spdk/log.h 00:03:38.646 TEST_HEADER include/spdk/lvol.h 00:03:38.646 TEST_HEADER include/spdk/memory.h 00:03:38.646 TEST_HEADER include/spdk/mmio.h 00:03:38.646 TEST_HEADER include/spdk/nbd.h 00:03:38.646 TEST_HEADER include/spdk/net.h 00:03:38.646 CC test/dma/test_dma/test_dma.o 00:03:38.646 TEST_HEADER include/spdk/notify.h 00:03:38.646 TEST_HEADER include/spdk/nvme.h 00:03:38.646 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.646 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.646 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.646 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.646 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.646 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.646 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.646 TEST_HEADER include/spdk/nvmf.h 00:03:38.646 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.646 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.646 TEST_HEADER include/spdk/opal.h 00:03:38.646 TEST_HEADER include/spdk/opal_spec.h 00:03:38.646 TEST_HEADER include/spdk/pci_ids.h 00:03:38.646 TEST_HEADER include/spdk/pipe.h 00:03:38.646 TEST_HEADER include/spdk/queue.h 00:03:38.646 TEST_HEADER include/spdk/reduce.h 00:03:38.646 TEST_HEADER include/spdk/rpc.h 00:03:38.646 TEST_HEADER include/spdk/scheduler.h 00:03:38.646 TEST_HEADER include/spdk/scsi.h 00:03:38.646 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.646 TEST_HEADER include/spdk/sock.h 00:03:38.646 TEST_HEADER include/spdk/stdinc.h 00:03:38.646 TEST_HEADER include/spdk/string.h 00:03:38.646 TEST_HEADER include/spdk/thread.h 00:03:38.646 TEST_HEADER include/spdk/trace.h 00:03:38.646 LINK rpc_client_test 00:03:38.646 TEST_HEADER include/spdk/trace_parser.h 00:03:38.646 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.646 TEST_HEADER include/spdk/tree.h 00:03:38.646 TEST_HEADER include/spdk/ublk.h 00:03:38.646 TEST_HEADER include/spdk/util.h 00:03:38.646 TEST_HEADER include/spdk/uuid.h 00:03:38.646 TEST_HEADER include/spdk/version.h 00:03:38.646 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.646 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.646 TEST_HEADER include/spdk/vhost.h 00:03:38.646 TEST_HEADER include/spdk/vmd.h 00:03:38.646 TEST_HEADER include/spdk/xor.h 00:03:38.646 LINK nvmf_tgt 00:03:38.646 TEST_HEADER include/spdk/zipf.h 00:03:38.646 CXX test/cpp_headers/accel.o 00:03:38.646 LINK zipf 00:03:38.646 LINK poller_perf 00:03:38.646 LINK spdk_trace_record 00:03:38.646 LINK bdev_svc 00:03:38.905 CXX test/cpp_headers/accel_module.o 00:03:38.905 LINK spdk_trace 00:03:38.905 CC app/iscsi_tgt/iscsi_tgt.o 00:03:38.905 LINK test_dma 00:03:38.905 CC test/app/histogram_perf/histogram_perf.o 00:03:38.905 CXX test/cpp_headers/assert.o 00:03:38.905 CC examples/ioat/perf/perf.o 00:03:38.905 CC app/spdk_tgt/spdk_tgt.o 00:03:38.905 CC app/spdk_lspci/spdk_lspci.o 00:03:38.905 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.163 CC test/app/jsoncat/jsoncat.o 
00:03:39.163 LINK iscsi_tgt 00:03:39.163 LINK histogram_perf 00:03:39.163 CXX test/cpp_headers/barrier.o 00:03:39.163 LINK mem_callbacks 00:03:39.163 LINK spdk_lspci 00:03:39.163 LINK ioat_perf 00:03:39.163 LINK spdk_tgt 00:03:39.163 LINK jsoncat 00:03:39.163 CXX test/cpp_headers/base64.o 00:03:39.163 CC test/app/stub/stub.o 00:03:39.422 CC test/env/vtophys/vtophys.o 00:03:39.422 CC examples/ioat/verify/verify.o 00:03:39.422 CXX test/cpp_headers/bdev.o 00:03:39.422 CC examples/vmd/lsvmd/lsvmd.o 00:03:39.422 CC examples/vmd/led/led.o 00:03:39.422 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:39.422 LINK stub 00:03:39.422 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.422 LINK nvme_fuzz 00:03:39.422 CC app/spdk_nvme_perf/perf.o 00:03:39.422 LINK vtophys 00:03:39.422 LINK lsvmd 00:03:39.682 CXX test/cpp_headers/bdev_module.o 00:03:39.682 LINK led 00:03:39.682 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.682 LINK verify 00:03:39.682 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:39.682 CXX test/cpp_headers/bdev_zone.o 00:03:39.682 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.682 CC examples/idxd/perf/perf.o 00:03:39.682 CC test/env/memory/memory_ut.o 00:03:39.964 LINK env_dpdk_post_init 00:03:39.964 CXX test/cpp_headers/bit_array.o 00:03:39.964 LINK interrupt_tgt 00:03:39.964 CC examples/thread/thread/thread_ex.o 00:03:39.964 LINK vhost_fuzz 00:03:39.964 CC test/event/event_perf/event_perf.o 00:03:39.964 CXX test/cpp_headers/bit_pool.o 00:03:40.223 CC test/env/pci/pci_ut.o 00:03:40.223 LINK idxd_perf 00:03:40.223 CXX test/cpp_headers/blob_bdev.o 00:03:40.223 LINK event_perf 00:03:40.223 LINK thread 00:03:40.223 CC examples/sock/hello_world/hello_sock.o 00:03:40.223 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.223 CC test/event/reactor/reactor.o 00:03:40.223 LINK spdk_nvme_perf 00:03:40.483 CC test/event/reactor_perf/reactor_perf.o 00:03:40.483 CXX test/cpp_headers/blobfs.o 00:03:40.483 LINK reactor 00:03:40.483 CC test/nvme/aer/aer.o 00:03:40.483 CC test/nvme/reset/reset.o 00:03:40.483 LINK reactor_perf 00:03:40.483 LINK hello_sock 00:03:40.483 LINK pci_ut 00:03:40.483 CC app/spdk_nvme_identify/identify.o 00:03:40.742 CXX test/cpp_headers/blob.o 00:03:40.742 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.742 LINK aer 00:03:40.742 LINK reset 00:03:40.742 CC test/event/app_repeat/app_repeat.o 00:03:40.742 CXX test/cpp_headers/conf.o 00:03:40.742 CXX test/cpp_headers/config.o 00:03:40.742 LINK memory_ut 00:03:41.001 CC examples/accel/perf/accel_perf.o 00:03:41.001 LINK spdk_nvme_discover 00:03:41.001 CXX test/cpp_headers/cpuset.o 00:03:41.001 LINK app_repeat 00:03:41.001 CC test/event/scheduler/scheduler.o 00:03:41.001 CXX test/cpp_headers/crc16.o 00:03:41.001 CC test/nvme/sgl/sgl.o 00:03:41.001 CXX test/cpp_headers/crc32.o 00:03:41.261 CC app/spdk_top/spdk_top.o 00:03:41.261 LINK scheduler 00:03:41.261 CC test/nvme/e2edp/nvme_dp.o 00:03:41.261 CXX test/cpp_headers/crc64.o 00:03:41.261 CC examples/blob/hello_world/hello_blob.o 00:03:41.261 CC test/nvme/overhead/overhead.o 00:03:41.261 LINK iscsi_fuzz 00:03:41.261 LINK sgl 00:03:41.261 CXX test/cpp_headers/dif.o 00:03:41.520 LINK accel_perf 00:03:41.520 LINK spdk_nvme_identify 00:03:41.520 LINK hello_blob 00:03:41.521 LINK nvme_dp 00:03:41.521 CC app/vhost/vhost.o 00:03:41.521 CXX test/cpp_headers/dma.o 00:03:41.521 LINK overhead 00:03:41.521 CC app/spdk_dd/spdk_dd.o 00:03:41.780 CXX test/cpp_headers/endian.o 00:03:41.780 CC test/nvme/err_injection/err_injection.o 00:03:41.780 LINK vhost 00:03:41.780 CC 
test/nvme/startup/startup.o 00:03:41.780 CC app/fio/nvme/fio_plugin.o 00:03:41.780 CC examples/blob/cli/blobcli.o 00:03:41.780 CC app/fio/bdev/fio_plugin.o 00:03:41.780 CC test/nvme/reserve/reserve.o 00:03:41.780 CXX test/cpp_headers/env_dpdk.o 00:03:42.078 LINK err_injection 00:03:42.078 CXX test/cpp_headers/env.o 00:03:42.078 LINK startup 00:03:42.078 LINK spdk_dd 00:03:42.078 CXX test/cpp_headers/event.o 00:03:42.078 LINK reserve 00:03:42.078 CC test/nvme/simple_copy/simple_copy.o 00:03:42.078 LINK spdk_top 00:03:42.078 CC test/nvme/connect_stress/connect_stress.o 00:03:42.338 CC test/nvme/boot_partition/boot_partition.o 00:03:42.338 CXX test/cpp_headers/fd_group.o 00:03:42.338 LINK spdk_bdev 00:03:42.338 CC test/nvme/compliance/nvme_compliance.o 00:03:42.338 LINK spdk_nvme 00:03:42.338 LINK blobcli 00:03:42.338 LINK connect_stress 00:03:42.338 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.338 LINK simple_copy 00:03:42.338 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.338 LINK boot_partition 00:03:42.338 CXX test/cpp_headers/fd.o 00:03:42.338 CXX test/cpp_headers/file.o 00:03:42.338 CXX test/cpp_headers/ftl.o 00:03:42.597 CXX test/cpp_headers/gpt_spec.o 00:03:42.597 LINK fused_ordering 00:03:42.597 LINK doorbell_aers 00:03:42.597 CXX test/cpp_headers/hexlify.o 00:03:42.597 CC test/nvme/cuse/cuse.o 00:03:42.597 CC test/nvme/fdp/fdp.o 00:03:42.597 LINK nvme_compliance 00:03:42.856 CC examples/nvme/hello_world/hello_world.o 00:03:42.856 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.856 CXX test/cpp_headers/histogram_data.o 00:03:42.856 CC test/accel/dif/dif.o 00:03:42.856 CC examples/nvme/reconnect/reconnect.o 00:03:42.856 CC test/blobfs/mkfs/mkfs.o 00:03:42.856 CXX test/cpp_headers/idxd.o 00:03:42.856 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.856 LINK hello_world 00:03:43.115 LINK fdp 00:03:43.116 CC test/lvol/esnap/esnap.o 00:03:43.116 LINK hello_bdev 00:03:43.116 LINK mkfs 00:03:43.116 CXX test/cpp_headers/idxd_spec.o 00:03:43.116 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:43.116 CXX test/cpp_headers/init.o 00:03:43.116 CXX test/cpp_headers/ioat.o 00:03:43.375 LINK reconnect 00:03:43.375 LINK dif 00:03:43.375 CXX test/cpp_headers/ioat_spec.o 00:03:43.375 CXX test/cpp_headers/iscsi_spec.o 00:03:43.375 CC examples/nvme/arbitration/arbitration.o 00:03:43.375 CC examples/nvme/hotplug/hotplug.o 00:03:43.375 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.634 CXX test/cpp_headers/json.o 00:03:43.634 CC examples/nvme/abort/abort.o 00:03:43.634 LINK hotplug 00:03:43.634 LINK cmb_copy 00:03:43.634 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:43.634 CXX test/cpp_headers/jsonrpc.o 00:03:43.634 LINK arbitration 00:03:43.893 LINK nvme_manage 00:03:43.893 LINK bdevperf 00:03:43.893 CXX test/cpp_headers/keyring.o 00:03:43.893 LINK pmr_persistence 00:03:43.893 CXX test/cpp_headers/keyring_module.o 00:03:43.893 CXX test/cpp_headers/likely.o 00:03:43.893 LINK cuse 00:03:43.893 CXX test/cpp_headers/log.o 00:03:43.893 CXX test/cpp_headers/lvol.o 00:03:43.893 LINK abort 00:03:43.893 CC test/bdev/bdevio/bdevio.o 00:03:43.893 CXX test/cpp_headers/memory.o 00:03:43.893 CXX test/cpp_headers/mmio.o 00:03:44.152 CXX test/cpp_headers/nbd.o 00:03:44.152 CXX test/cpp_headers/net.o 00:03:44.152 CXX test/cpp_headers/notify.o 00:03:44.152 CXX test/cpp_headers/nvme.o 00:03:44.152 CXX test/cpp_headers/nvme_intel.o 00:03:44.152 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.152 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.152 CXX test/cpp_headers/nvme_spec.o 00:03:44.152 CXX 
test/cpp_headers/nvme_zns.o 00:03:44.152 CXX test/cpp_headers/nvmf_cmd.o 00:03:44.152 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:44.152 CXX test/cpp_headers/nvmf.o 00:03:44.411 CXX test/cpp_headers/nvmf_spec.o 00:03:44.411 CXX test/cpp_headers/nvmf_transport.o 00:03:44.411 CXX test/cpp_headers/opal.o 00:03:44.411 CXX test/cpp_headers/opal_spec.o 00:03:44.411 CC examples/nvmf/nvmf/nvmf.o 00:03:44.411 CXX test/cpp_headers/pci_ids.o 00:03:44.411 CXX test/cpp_headers/pipe.o 00:03:44.411 CXX test/cpp_headers/queue.o 00:03:44.411 LINK bdevio 00:03:44.411 CXX test/cpp_headers/reduce.o 00:03:44.411 CXX test/cpp_headers/rpc.o 00:03:44.411 CXX test/cpp_headers/scheduler.o 00:03:44.670 CXX test/cpp_headers/scsi.o 00:03:44.670 CXX test/cpp_headers/scsi_spec.o 00:03:44.670 CXX test/cpp_headers/sock.o 00:03:44.670 CXX test/cpp_headers/stdinc.o 00:03:44.670 CXX test/cpp_headers/string.o 00:03:44.670 CXX test/cpp_headers/thread.o 00:03:44.670 CXX test/cpp_headers/trace.o 00:03:44.670 CXX test/cpp_headers/trace_parser.o 00:03:44.670 CXX test/cpp_headers/tree.o 00:03:44.670 LINK nvmf 00:03:44.670 CXX test/cpp_headers/ublk.o 00:03:44.670 CXX test/cpp_headers/util.o 00:03:44.670 CXX test/cpp_headers/uuid.o 00:03:44.670 CXX test/cpp_headers/version.o 00:03:44.929 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.929 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.929 CXX test/cpp_headers/vhost.o 00:03:44.929 CXX test/cpp_headers/vmd.o 00:03:44.929 CXX test/cpp_headers/xor.o 00:03:44.929 CXX test/cpp_headers/zipf.o 00:03:49.119 LINK esnap 00:03:49.119 ************************************ 00:03:49.119 END TEST make 00:03:49.119 ************************************ 00:03:49.119 00:03:49.119 real 1m6.312s 00:03:49.119 user 5m46.622s 00:03:49.119 sys 1m42.525s 00:03:49.119 11:57:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:49.119 11:57:36 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.119 11:57:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.119 11:57:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.119 11:57:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.119 11:57:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.119 11:57:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.119 11:57:36 -- pm/common@44 -- $ pid=5174 00:03:49.119 11:57:36 -- pm/common@50 -- $ kill -TERM 5174 00:03:49.119 11:57:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.119 11:57:36 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.119 11:57:36 -- pm/common@44 -- $ pid=5176 00:03:49.119 11:57:36 -- pm/common@50 -- $ kill -TERM 5176 00:03:49.119 11:57:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.119 11:57:37 -- nvmf/common.sh@7 -- # uname -s 00:03:49.119 11:57:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.119 11:57:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.119 11:57:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.119 11:57:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.119 11:57:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.119 11:57:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.119 11:57:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.119 11:57:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.119 11:57:37 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.119 11:57:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.119 11:57:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:681cafa1-0731-46ac-b02a-daaaadf83aad 00:03:49.119 11:57:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=681cafa1-0731-46ac-b02a-daaaadf83aad 00:03:49.119 11:57:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.119 11:57:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.119 11:57:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.119 11:57:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.119 11:57:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.119 11:57:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.119 11:57:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.119 11:57:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.119 11:57:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.119 11:57:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.119 11:57:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.119 11:57:37 -- paths/export.sh@5 -- # export PATH 00:03:49.119 11:57:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.119 11:57:37 -- nvmf/common.sh@47 -- # : 0 00:03:49.119 11:57:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:49.119 11:57:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:49.119 11:57:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.119 11:57:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.119 11:57:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.119 11:57:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:49.119 11:57:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:49.119 11:57:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:49.378 11:57:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.378 11:57:37 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.378 11:57:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.378 11:57:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.378 11:57:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.378 11:57:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.378 11:57:37 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.378 11:57:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.378 11:57:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.378 11:57:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.378 11:57:37 -- spdk/autotest.sh@48 -- # udevadm_pid=53628 00:03:49.378 11:57:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.378 11:57:37 -- pm/common@17 -- # local monitor 00:03:49.378 11:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.378 11:57:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.378 11:57:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.378 11:57:37 -- pm/common@21 -- # date +%s 00:03:49.378 11:57:37 -- pm/common@21 -- # date +%s 00:03:49.378 11:57:37 -- pm/common@25 -- # sleep 1 00:03:49.378 11:57:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721995057 00:03:49.378 11:57:37 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721995057 00:03:49.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721995057_collect-vmstat.pm.log 00:03:49.378 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721995057_collect-cpu-load.pm.log 00:03:50.337 11:57:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.337 11:57:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.337 11:57:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.337 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:03:50.337 11:57:38 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.337 11:57:38 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:50.337 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:03:50.337 11:57:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.337 11:57:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.337 11:57:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.337 11:57:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.337 11:57:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.337 11:57:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.337 11:57:38 -- common/autotest_common.sh@1455 -- # uname 00:03:50.337 11:57:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:50.337 11:57:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.337 11:57:38 -- common/autotest_common.sh@1475 -- # uname 00:03:50.337 11:57:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:50.337 11:57:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:50.337 11:57:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:50.337 11:57:38 -- spdk/autotest.sh@72 -- # hash lcov 00:03:50.337 11:57:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:50.337 11:57:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:50.337 --rc lcov_branch_coverage=1 00:03:50.337 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc 
geninfo_all_blocks=1 00:03:50.337 ' 00:03:50.337 11:57:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:50.337 --rc lcov_branch_coverage=1 00:03:50.337 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 ' 00:03:50.337 11:57:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:50.337 --rc lcov_branch_coverage=1 00:03:50.337 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --no-external' 00:03:50.337 11:57:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:50.337 --rc lcov_branch_coverage=1 00:03:50.337 --rc lcov_function_coverage=1 00:03:50.337 --rc genhtml_branch_coverage=1 00:03:50.337 --rc genhtml_function_coverage=1 00:03:50.337 --rc genhtml_legend=1 00:03:50.337 --rc geninfo_all_blocks=1 00:03:50.337 --no-external' 00:03:50.337 11:57:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:50.597 lcov: LCOV version 1.14 00:03:50.597 11:57:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:05.487 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.487 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:17.735 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:17.735 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions 
found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 
00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:17.736 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:17.736 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:17.995 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:17.995 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:17.995 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:17.996 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:17.996 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:17.996 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:18.255 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:18.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:21.542 11:58:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:21.542 11:58:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.542 11:58:09 -- common/autotest_common.sh@10 -- # set +x 00:04:21.542 11:58:09 -- spdk/autotest.sh@91 -- # rm -f 00:04:21.542 11:58:09 -- spdk/autotest.sh@94 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.678 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:22.678 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:22.678 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:22.678 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:22.678 11:58:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:22.678 11:58:10 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:22.678 11:58:10 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:22.678 11:58:10 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:22.678 11:58:10 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 
00:04:22.678 11:58:10 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:22.678 11:58:10 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:22.678 11:58:10 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:22.678 11:58:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:22.678 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.678 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.678 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:22.678 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:22.678 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:22.678 No valid GPT data, bailing 00:04:22.678 11:58:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:22.678 11:58:10 -- scripts/common.sh@391 -- # pt= 00:04:22.678 11:58:10 -- scripts/common.sh@392 -- # return 1 00:04:22.678 11:58:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:22.678 1+0 records in 00:04:22.678 1+0 records out 00:04:22.678 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020996 s, 49.9 MB/s 00:04:22.678 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.678 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.678 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:22.678 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:22.678 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:22.937 No valid GPT data, bailing 00:04:22.937 11:58:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:22.937 11:58:10 -- scripts/common.sh@391 -- # pt= 00:04:22.937 11:58:10 -- scripts/common.sh@392 -- # return 1 00:04:22.937 11:58:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:22.937 1+0 records in 00:04:22.937 1+0 records out 00:04:22.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455542 s, 230 MB/s 00:04:22.937 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.937 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.937 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:22.937 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:22.937 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:22.937 No valid GPT data, bailing 00:04:22.937 11:58:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:22.937 11:58:10 -- scripts/common.sh@391 -- # pt= 00:04:22.937 11:58:10 -- scripts/common.sh@392 -- # return 1 00:04:22.937 11:58:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:22.937 1+0 records in 00:04:22.937 1+0 records out 00:04:22.937 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503415 s, 208 MB/s 00:04:22.937 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.937 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.937 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:22.937 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:22.938 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:22.938 No valid GPT data, bailing 00:04:22.938 11:58:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:22.938 
11:58:10 -- scripts/common.sh@391 -- # pt= 00:04:22.938 11:58:10 -- scripts/common.sh@392 -- # return 1 00:04:22.938 11:58:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:22.938 1+0 records in 00:04:22.938 1+0 records out 00:04:22.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518128 s, 202 MB/s 00:04:22.938 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:22.938 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:22.938 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:22.938 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:22.938 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:23.197 No valid GPT data, bailing 00:04:23.197 11:58:10 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:23.197 11:58:10 -- scripts/common.sh@391 -- # pt= 00:04:23.197 11:58:10 -- scripts/common.sh@392 -- # return 1 00:04:23.197 11:58:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:23.197 1+0 records in 00:04:23.197 1+0 records out 00:04:23.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614581 s, 171 MB/s 00:04:23.197 11:58:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:23.197 11:58:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:23.197 11:58:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:23.197 11:58:10 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:23.197 11:58:10 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:23.197 No valid GPT data, bailing 00:04:23.197 11:58:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:23.197 11:58:11 -- scripts/common.sh@391 -- # pt= 00:04:23.197 11:58:11 -- scripts/common.sh@392 -- # return 1 00:04:23.197 11:58:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:23.197 1+0 records in 00:04:23.197 1+0 records out 00:04:23.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508838 s, 206 MB/s 00:04:23.197 11:58:11 -- spdk/autotest.sh@118 -- # sync 00:04:23.197 11:58:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:23.197 11:58:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:23.197 11:58:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.728 11:58:13 -- spdk/autotest.sh@124 -- # uname -s 00:04:25.728 11:58:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:25.728 11:58:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:25.728 11:58:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.728 11:58:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.728 11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:04:25.728 ************************************ 00:04:25.728 START TEST setup.sh 00:04:25.728 ************************************ 00:04:25.728 11:58:13 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:25.728 * Looking for test storage... 
00:04:25.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.728 11:58:13 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:25.728 11:58:13 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:25.728 11:58:13 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:25.728 11:58:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.728 11:58:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.728 11:58:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.728 ************************************ 00:04:25.728 START TEST acl 00:04:25.728 ************************************ 00:04:25.728 11:58:13 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:25.986 * Looking for test storage... 00:04:25.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.986 11:58:13 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:25.986 11:58:13 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:25.986 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:25.987 11:58:13 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:25.987 11:58:13 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:25.987 11:58:13 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:25.987 11:58:13 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:25.987 11:58:13 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:25.987 11:58:13 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:25.987 11:58:13 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.987 11:58:13 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.361 11:58:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.361 11:58:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:27.361 11:58:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.361 11:58:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:27.361 11:58:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.361 11:58:15 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:27.928 11:58:15 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:27.928 11:58:15 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:27.928 11:58:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.495 Hugepages 00:04:28.495 node hugesize free / total 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.495 00:04:28.495 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:28.495 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:28.753 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:29.012 11:58:16 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:29.012 11:58:16 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.012 11:58:16 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.012 11:58:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.012 ************************************ 00:04:29.012 START TEST denied 00:04:29.012 ************************************ 00:04:29.012 11:58:16 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:29.012 11:58:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:29.012 11:58:16 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:29.012 11:58:16 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:29.012 11:58:16 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.012 11:58:16 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.912 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:30.912 11:58:18 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:30.912 11:58:18 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:30.912 11:58:18 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:30.912 11:58:18 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.913 11:58:18 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.477 00:04:37.477 real 0m7.831s 00:04:37.477 user 0m0.996s 00:04:37.477 sys 0m1.943s 00:04:37.477 11:58:24 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.477 11:58:24 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.477 ************************************ 00:04:37.477 END TEST denied 00:04:37.477 ************************************ 00:04:37.477 11:58:24 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.477 11:58:24 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.477 11:58:24 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.477 11:58:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.478 ************************************ 00:04:37.478 START TEST allowed 00:04:37.478 ************************************ 00:04:37.478 11:58:24 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:37.478 11:58:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:37.478 11:58:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:37.478 11:58:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:37.478 11:58:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.478 11:58:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.413 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.413 11:58:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.789 00:04:39.789 real 0m2.752s 00:04:39.789 user 0m1.170s 00:04:39.789 sys 0m1.603s 00:04:39.789 11:58:27 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.789 ************************************ 00:04:39.789 END TEST allowed 00:04:39.789 ************************************ 00:04:39.789 11:58:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 ************************************ 00:04:39.789 END TEST acl 00:04:39.789 ************************************ 00:04:39.789 00:04:39.789 real 0m14.049s 00:04:39.789 user 0m3.556s 00:04:39.789 sys 0m5.657s 00:04:39.789 11:58:27 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.789 11:58:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 11:58:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:39.789 11:58:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.789 11:58:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.789 11:58:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.789 ************************************ 00:04:39.789 START TEST hugepages 00:04:39.789 ************************************ 00:04:39.789 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:40.049 * Looking for test storage... 
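[annotation] The acl suite that ends above exercises setup.sh's PCI filtering in two directions: with PCI_BLOCKED set, the controller at 0000:00:10.0 is skipped and stays on the kernel nvme driver; with PCI_ALLOWED set (after a setup.sh reset in between), only that controller is rebound to uio_pci_generic while 00:11.0, 00:12.0 and 00:13.0 remain on nvme. A minimal sketch of the driver check both sub-tests rely on; the BDFs and sysfs paths come from the trace, but the helper name check_driver is illustrative and not acl.sh's own:

  check_driver() {                                            # illustrative helper, not from acl.sh
    local bdf=$1 expected=$2 driver
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")  # e.g. /sys/bus/pci/drivers/nvme
    [[ ${driver##*/} == "$expected" ]]
  }

  PCI_BLOCKED='0000:00:10.0' scripts/setup.sh config          # log: "Skipping denied controller at 0000:00:10.0"
  check_driver 0000:00:10.0 nvme                              # blocked controller stays on the kernel driver

  PCI_ALLOWED='0000:00:10.0' scripts/setup.sh config          # log: "0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic"
  check_driver 0000:00:11.0 nvme                              # controllers outside the allow list stay on nvme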
00:04:40.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5820672 kB' 'MemAvailable: 7408076 kB' 'Buffers: 2436 kB' 'Cached: 1800708 kB' 'SwapCached: 0 kB' 'Active: 454388 kB' 'Inactive: 1460648 kB' 'Active(anon): 122404 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 113576 kB' 'Mapped: 48612 kB' 'Shmem: 10512 kB' 'KReclaimable: 63420 kB' 'Slab: 138064 kB' 'SReclaimable: 63420 kB' 'SUnreclaim: 74644 kB' 'KernelStack: 6252 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 346904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.049 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.050 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
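[annotation] The long run of continue entries here is common.sh's get_meminfo helper scanning /proc/meminfo one field at a time: each line is split on ': ' into a name and a value, non-matching names fall through to continue, and the first match has its value echoed back (2048 for Hugepagesize, as the entries that follow show). A condensed paraphrase of the traced helper; the node-specific branch and the exact return handling are simplified:

  get_meminfo() {                           # paraphrase of the setup/common.sh helper traced above
    local get=$1 line var val _
    local -a mem
    mapfile -t mem < /proc/meminfo          # when a node is requested, a per-node meminfo file is read instead
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue      # the long series of "continue" entries in the trace
      echo "$val"                           # e.g. get_meminfo Hugepagesize -> 2048 (kB)
      return 0
    done
    return 1
  }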
00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.051 11:58:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:40.051 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.051 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.051 11:58:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.051 ************************************ 00:04:40.051 START TEST default_setup 00:04:40.051 ************************************ 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.051 11:58:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.554 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.554 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.554 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.818 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.818 
11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927784 kB' 'MemAvailable: 9514964 kB' 'Buffers: 2436 kB' 'Cached: 1800692 kB' 'SwapCached: 0 kB' 'Active: 464960 kB' 'Inactive: 1460672 kB' 'Active(anon): 132976 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123936 kB' 'Mapped: 48844 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137212 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74284 kB' 'KernelStack: 6320 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.818 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
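[annotation] Working backwards through the trace: default_setup asked get_test_nr_hugepages for 2097152 on node 0, which with the 2048 kB page size discovered earlier comes out to 1024 pages, and the meminfo snapshots above already report HugePages_Total: 1024 and Hugetlb: 2097152 kB. verify_nr_hugepages is now re-reading /proc/meminfo (AnonHugePages, HugePages_Surp, ...) to confirm that allocation. A small sketch of the request-then-verify arithmetic; the kB units and the exact comparison are inferred from the numbers in the log, not quoted from hugepages.sh:

  default_hugepages=2048                                 # kB, the value get_meminfo Hugepagesize returned
  size_kb=2097152                                        # the request seen in the trace (2 GiB worth, node 0)
  nr_hugepages=$(( size_kb / default_hugepages ))        # 2097152 / 2048 = 1024 pages

  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # the snapshots above already show 1024
  (( total == nr_hugepages )) || echo "hugepage allocation mismatch: $total vs $nr_hugepages" >&2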
00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927784 kB' 'MemAvailable: 9514968 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464384 kB' 'Inactive: 1460676 kB' 'Active(anon): 132400 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123532 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137224 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74296 kB' 'KernelStack: 6272 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 
9437184 kB' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.819 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.820 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.821 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.821 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927784 kB' 'MemAvailable: 9514968 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464284 kB' 'Inactive: 1460676 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137220 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74292 kB' 'KernelStack: 6256 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.822 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.823 
11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.823 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
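Before each of these scans the trace also shows "mapfile -t mem", the expansion mem=("${mem[@]#Node +([0-9]) }") and a long printf '%s\n' listing every meminfo field: the helper first snapshots the whole file into an array, strips the "Node <N> " prefix that per-node meminfo files carry, and then replays that snapshot into the read loop, which is why the full field list is printed before every lookup. A rough, self-contained sketch of that snapshot step (the node value below is hypothetical; in this run node is empty, so the check at common.sh@23 fails and the plain /proc/meminfo is used):

    shopt -s extglob                              # needed for the +([0-9]) pattern below
    node=0                                        # hypothetical NUMA node; empty in the traced run
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"                     # snapshot every "Key: value" line into an array
    mem=("${mem[@]#Node +([0-9]) }")              # drop the "Node 0 " prefix so keys match the global format
    printf '%s\n' "${mem[@]}"                     # the long printf seen in the trace: the snapshot fed to the scan loop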
00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.824 nr_hugepages=1024 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.824 resv_hugepages=0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.824 surplus_hugepages=0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.824 anon_hugepages=0 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927028 kB' 'MemAvailable: 9514212 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464544 kB' 'Inactive: 1460676 kB' 'Active(anon): 132560 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 123712 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137220 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74292 kB' 'KernelStack: 6256 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.824 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.825 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.826 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927028 kB' 'MemUsed: 4314940 kB' 'SwapCached: 0 kB' 'Active: 464380 kB' 'Inactive: 1460676 kB' 'Active(anon): 132396 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 1803132 kB' 'Mapped: 48596 kB' 'AnonPages: 123520 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62928 kB' 'Slab: 137212 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.827 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.828 11:58:29 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.828 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:41.829 node0=1024 expecting 1024 00:04:41.829 ************************************ 00:04:41.829 END TEST default_setup 00:04:41.829 ************************************ 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.829 00:04:41.829 real 0m1.811s 00:04:41.829 user 0m0.702s 00:04:41.829 sys 0m1.084s 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.829 11:58:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:42.088 11:58:29 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:42.088 11:58:29 
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.088 11:58:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.088 11:58:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.088 ************************************ 00:04:42.088 START TEST per_node_1G_alloc 00:04:42.088 ************************************ 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.088 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.089 11:58:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.920 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.921 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.921 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:04:42.921 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974256 kB' 'MemAvailable: 10561440 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464572 kB' 'Inactive: 1460676 kB' 'Active(anon): 132588 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123680 kB' 'Mapped: 48780 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137296 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74368 kB' 'KernelStack: 6248 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 
178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 11:58:30 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _
00:04:42.922 [... setup/common.sh@32 trace condensed: the remaining /proc/meminfo keys (CommitLimit through HardwareCorrupted) are each compared against AnonHugePages and skipped with continue ...]
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.923 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974260 kB' 'MemAvailable: 10561444 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464540 kB' 'Inactive: 1460676 kB' 'Active(anon): 132556 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123664 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137340 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74412 kB' 'KernelStack: 6288 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:42.923 [... setup/common.sh@32 trace condensed: every /proc/meminfo key in the snapshot above is compared against HugePages_Surp and skipped with continue until the match near the end of the list ...]
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
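The trace above is the get_meminfo helper from setup/common.sh resolving HugePages_Surp: it snapshots /proc/meminfo (or a per-node meminfo file when a node is passed), walks the "Key: value kB" lines with IFS=': ', and echoes the value once the requested key matches. A minimal, self-contained sketch of that pattern follows; the function name meminfo_value and its argument handling are illustrative only, not the exact common.sh source.

    #!/usr/bin/env bash
    # Sketch only: mirrors the parse loop visible in the trace, not the SPDK helper itself.
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node <n> " prefixes

    meminfo_value() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node
        local var val _ mem_f=/proc/meminfo mem

        # per-node counters live under /sys when a node is requested
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines begin with "Node <n> "

        # scan "Key:   value kB" lines until the requested key matches
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # On the VM in this run these would print 0, 0 and 512 respectively.
    meminfo_value HugePages_Surp
    meminfo_value HugePages_Rsvd
    meminfo_value HugePages_Total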
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.925 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974260 kB' 'MemAvailable: 10561444 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464184 kB' 'Inactive: 1460676 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123556 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137336 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74408 kB' 'KernelStack: 6272 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:42.925 [... setup/common.sh@32 trace condensed: every /proc/meminfo key in the snapshot above is compared against HugePages_Rsvd and skipped with continue until the match ...]
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:42.927 nr_hugepages=512 resv_hugepages=0 surplus_hugepages=0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
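Just below (setup/hugepages.sh@107 and @109) the test asserts that the configured pool is consistent: the 512 pages it expects must equal nr_hugepages plus the surplus and reserved counts it just read. A rough, self-contained way to reproduce that bookkeeping by hand is sketched here, using HugePages_Total from /proc/meminfo in place of the script's nr_hugepages variable; the helper name and the hard-coded expected count are assumptions for illustration, not part of the test.

    #!/usr/bin/env bash
    # Sketch only: re-creates the shape of the hugepages.sh consistency check by hand.
    check_hugepage_pool() {
        local expected=$1 total surp resv
        # pull the three counters straight out of /proc/meminfo
        total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
        surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
        resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)

        # in this run: 512 == 512 + 0 + 0, and 512 == 512
        if (( expected == total + surp + resv )) && (( expected == total )); then
            echo "hugepage pool OK: total=$total surp=$surp resv=$resv"
        else
            echo "hugepage pool mismatch: total=$total surp=$surp resv=$resv" >&2
            return 1
        fi
    }

    check_hugepage_pool 512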
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.927 anon_hugepages=0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.927 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.928 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974260 kB' 'MemAvailable: 10561444 kB' 'Buffers: 2436 kB' 'Cached: 1800696 kB' 'SwapCached: 0 kB' 'Active: 464444 kB' 'Inactive: 1460676 kB' 'Active(anon): 132460 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137332 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74404 kB' 'KernelStack: 6272 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:42.928 [... setup/common.sh@32 trace condensed: MemTotal through CommitLimit are each compared against HugePages_Total and skipped with continue; the scan continues below ...]
00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=':
' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.929 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.930 11:58:30 
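The loop traced above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time until the requested field (here HugePages_Total) matches, then echoing its value. A minimal stand-alone sketch of that kind of lookup; get_meminfo_field is a hypothetical name, not the SPDK helper itself, and the real helper caches the file into an array with mapfile first:

# Look up one meminfo field, optionally from a NUMA node's meminfo file.
get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file; its lines carry a
    # "Node N " prefix that has to be stripped before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val rest
    while IFS=': ' read -r var val rest; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example: get_meminfo_field HugePages_Total    -> 512 in this run
#          get_meminfo_field HugePages_Surp 0   -> surplus pages on node 0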
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
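hugepages.sh@115 through @117 above start the per-node bookkeeping: each node's expected page count is padded with reserved pages and with the surplus value returned by the node lookup that follows, and is then compared with what sysfs reported. A condensed sketch of that accounting; the array contents are illustrative values taken from this run, not the script's own variables:

nodes_test=([0]=512)   # pages the test asked for on each node
nodes_sys=([0]=512)    # pages sysfs actually reports per node
resv=0                 # reserved pages counted in the system-wide pass

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # reserved pages still count as allocated
    surp=0                           # get_meminfo HugePages_Surp <node>; 0 in this run
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # e.g. "node0=512 expecting 512"
done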
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974260 kB' 'MemUsed: 3267708 kB' 'SwapCached: 0 kB' 'Active: 464440 kB' 'Inactive: 1460676 kB' 'Active(anon): 132456 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460676 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1803132 kB' 'Mapped: 48596 kB' 'AnonPages: 123556 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62928 kB' 'Slab: 137324 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.930 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[ ... setup/common.sh@31-@32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for each node0 meminfo field from MemTotal through HugePages_Free; none of them match HugePages_Surp ... ]
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.931 node0=512 expecting 512
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:42.931 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:43.191
00:04:43.191 real 0m1.044s
00:04:43.191 user 0m0.427s
00:04:43.191 sys 0m0.646s
00:04:43.191 ************************************
00:04:43.191 END TEST per_node_1G_alloc
00:04:43.191 ************************************
00:04:43.191 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.191 11:58:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:43.191 11:58:30 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:43.191 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.191 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.191 11:58:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:43.191 ************************************
00:04:43.191 START TEST even_2G_alloc
00:04:43.191 ************************************
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
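get_test_nr_hugepages above turns the requested size into a page count: 2097152 kB against the 2048 kB Hugepagesize reported in the meminfo snapshots gives nr_hugepages=1024 for even_2G_alloc. A worked equivalent of that conversion; the real helper's rounding and per-node splitting are not shown:

size_kb=2097152          # 2 GiB requested by even_2G_alloc
default_hugepages=2048   # Hugepagesize: 2048 kB in this run
(( size_kb >= default_hugepages )) || exit 1
nr_hugepages=$(( size_kb / default_hugepages ))   # -> 1024
echo "nr_hugepages=$nr_hugepages"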
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.191 11:58:30 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:44.021 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.021 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.021 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:44.021 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
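With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh is what actually reserves the pages before verify_nr_hugepages re-reads meminfo below. A hedged sketch of what an even per-node reservation amounts to on the kernel's standard sysfs interface; the real setup.sh does considerably more (driver binding, permissions, clearing stale pages) and this is not its code:

# Needs root; assumes the default 2048 kB hugepage size seen in this run.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
(( ${#nodes[@]} )) || exit 1
per_node=$(( NRHUGE / ${#nodes[@]} ))   # 1024 pages over 1 online node here
for n in "${nodes[@]}"; do
    echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
done
cat /proc/sys/vm/nr_hugepages           # should now report 1024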
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927908 kB' 'MemAvailable: 9515100 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 464356 kB' 'Inactive: 1460684 kB' 'Active(anon): 132372 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123460 kB' 'Mapped: 48724 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137244 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74316 kB' 'KernelStack: 6264 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:44.021 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[ ... setup/common.sh@31-@32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for each /proc/meminfo field from MemTotal through HardwareCorrupted; none of them match AnonHugePages ... ]
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
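verify_nr_hugepages is gathering its inputs here: AnonHugePages (to rule out transparent hugepage interference) has just come back as 0, HugePages_Surp is being read next, and HugePages_Rsvd and HugePages_Total follow, so that the same total == requested + surplus + reserved identity checked at hugepages.sh@110 earlier can be re-checked for this test. A compact sketch of that check, reusing the lookup sketched above; the commented values are the ones visible in this run's snapshots:

nr_hugepages=1024                              # requested by even_2G_alloc
anon=$(get_meminfo_field AnonHugePages)        # 0
surp=$(get_meminfo_field HugePages_Surp)       # 0
resv=$(get_meminfo_field HugePages_Rsvd)       # 0
total=$(get_meminfo_field HugePages_Total)     # 1024
(( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$total OK"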
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.022 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927908 kB' 'MemAvailable: 9515100 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 464164 kB' 'Inactive: 1460684 kB' 'Active(anon): 132180 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123568 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137264 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74336 kB' 'KernelStack: 6272 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[ ... setup/common.sh@31-@32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for the following /proc/meminfo fields (MemFree through SReclaimable); none of them match HugePages_Surp ... ]
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.023 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.024 
11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.024 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927908 kB' 'MemAvailable: 9515100 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 464236 kB' 'Inactive: 1460684 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123656 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137264 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74336 kB' 'KernelStack: 6288 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
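The wall of records above and below is one loop: get_meminfo in setup/common.sh scans /proc/meminfo one field at a time with IFS=': ' read -r var val _, and continues past every key that is not the one requested (first HugePages_Surp, then HugePages_Rsvd), so each meminfo field shows up in the xtrace as a [[ ... ]] test followed by continue. Reconstructed from these records purely as an illustrative sketch, not the verbatim SPDK source, the pattern is:

shopt -s extglob                         # needed for the +([0-9]) strip below
get_meminfo_sketch() {
    local get=$1 node=${2:-}             # e.g. HugePages_Surp; NUMA node id is optional
    local mem_f=/proc/meminfo
    # per-node stats live under /sys when a node id is supplied
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node N " prefix used by per-node files
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue # the skipped keys are what fill the trace above
        echo "${val:-0}"
        return 0
    done
    echo 0                               # sketch behaviour: report 0 if the key is absent
}

Called as get_meminfo_sketch HugePages_Surp it would print 0 on this runner, matching the surp=0 the script records at hugepages.sh@99 below.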
00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.025 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
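As an aside, the meminfo snapshot printed a few records above already accounts for the test's name: the runner holds HugePages_Total: 1024 pages of Hugepagesize: 2048 kB, and 1024 * 2048 kB = 2097152 kB = 2 GiB, exactly the Hugetlb: 2097152 kB line in the same snapshot, consistent with the 2G hugepage allocation this even_2G_alloc test sets up. The arithmetic, using only values copied from the trace:

echo $(( 1024 * 2048 ))    # HugePages_Total * Hugepagesize in kB -> 2097152 kB, i.e. 2 GiB, matching Hugetlb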
00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.026 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.027 nr_hugepages=1024 00:04:44.027 resv_hugepages=0 00:04:44.027 surplus_hugepages=0 00:04:44.027 anon_hugepages=0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7927908 kB' 'MemAvailable: 9515100 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 464228 kB' 'Inactive: 1460684 kB' 'Active(anon): 132244 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123408 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 62928 kB' 'Slab: 137260 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74332 kB' 'KernelStack: 6288 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 363312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.027 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
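For orientation, the hugepages.sh@97-@110 records threaded through the trace above are the payoff of these scans: get_meminfo returned anon=0, surp=0 and resv=0, the script echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and then asserts that all 1024 configured pages are ordinary, non-surplus, non-reserved pages before re-reading HugePages_Total in the pass that resumes below; each arithmetic test fails the run if it evaluates false. Collapsed into standalone shell, with the values taken from the trace and variable names mirroring the script's, the assertion amounts to:

nr_hugepages=1024; surp=0; resv=0; anon=0
(( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: no surplus or reserved pages hidden in the total
(( 1024 == nr_hugepages ))                 # hugepages.sh@109: the requested page count was actually reached
# hugepages.sh@110 then calls get_meminfo HugePages_Total again, which is the scan continuing below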
00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.028 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 
11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7928492 kB' 'MemUsed: 4313476 kB' 'SwapCached: 0 kB' 'Active: 464172 kB' 'Inactive: 1460684 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1803140 kB' 'Mapped: 48596 kB' 'AnonPages: 123568 kB' 'Shmem: 10472 kB' 'KernelStack: 6272 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62928 kB' 'Slab: 137252 kB' 'SReclaimable: 62928 kB' 'SUnreclaim: 74324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.029 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.289 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.290 node0=1024 expecting 1024 00:04:44.290 ************************************ 00:04:44.290 END TEST even_2G_alloc 00:04:44.290 ************************************ 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:44.290 00:04:44.290 real 0m1.066s 00:04:44.290 user 0m0.451s 00:04:44.290 sys 0m0.647s 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.290 11:58:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.290 11:58:32 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:44.290 11:58:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.290 11:58:32 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.290 11:58:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
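The even_2G_alloc trace above is one long field scan: setup/common.sh walks every key of the (per-node) meminfo file with IFS=': ' / read -r var val _, skipping each key with continue until it reaches the one requested (HugePages_Total, then HugePages_Surp for node0) and echoes its value. A minimal sketch of that scan, reconstructed from the calls visible in the log rather than the verbatim SPDK helper:

#!/usr/bin/env bash
shopt -s extglob
# get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo, or from
# the per-node meminfo file when NODE is given and the sysfs file exists.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # prints 1024 on the VM in this run
get_meminfo HugePages_Surp 0    # surplus pages reported for node0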
00:04:44.290 ************************************ 00:04:44.290 START TEST odd_alloc 00:04:44.290 ************************************ 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.290 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.123 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.123 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.123 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.123 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7923912 kB' 'MemAvailable: 9511100 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460852 kB' 'Inactive: 1460680 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 47964 kB' 'Shmem: 10472 kB' 'KReclaimable: 62924 kB' 'Slab: 137184 kB' 'SReclaimable: 62924 kB' 'SUnreclaim: 74260 kB' 'KernelStack: 6216 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
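Before the odd_alloc pass starts scanning individual keys, the surrounding hugepages.sh logic only needs three of them: it checks that HugePages_Total equals the requested page count plus surplus and reserved pages (the same "(( 1024 == nr_hugepages + surp + resv ))" check seen after the even_2G_alloc scan, now expecting 1025 pages since the trace sets nr_hugepages=1025 for HUGEMEM=2049). A short sketch of that arithmetic under those assumptions, with an awk one-liner standing in for the field scan; helper and variable names are illustrative, not the SPDK originals:

#!/usr/bin/env bash
# Read a single numeric field from /proc/meminfo.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

expected=1025                     # odd_alloc requests 1025 pages (nr_hugepages=1025 in the trace)
total=$(meminfo HugePages_Total)
surp=$(meminfo HugePages_Surp)
resv=$(meminfo HugePages_Rsvd)

if (( total == expected + surp + resv )); then
    echo "node0=$total expecting $expected"
else
    echo "hugepage count mismatch: total=$total surp=$surp resv=$resv" >&2
fi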
00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.123 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 
11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:45.124 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7923912 kB' 'MemAvailable: 9511096 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460764 kB' 'Inactive: 1460680 kB' 'Active(anon): 128780 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 47856 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137156 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74236 kB' 'KernelStack: 6192 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.125 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 
11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.126 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7923912 kB' 'MemAvailable: 9511096 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460388 kB' 'Inactive: 1460680 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 47856 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137144 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74224 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
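The trace above is the get_meminfo helper from test/setup/common.sh walking a meminfo file one 'key: value' line at a time until it reaches the requested key; every other key fails the pattern test and hits continue, which is why each /proc/meminfo field shows up once per lookup. A minimal sketch of that loop, reconstructed from the xtrace (the real helper may differ in detail):

shopt -s extglob

# get_meminfo KEY [NODE]: echo the value of KEY from /proc/meminfo, or from
# /sys/devices/system/node/nodeNODE/meminfo when NODE is given and that file exists.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip it so the
    # same parsing works for both files (the +([0-9]) pattern needs extglob).
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

Used the same way as in this run, e.g. surp=$(get_meminfo HugePages_Surp) for the system-wide value or get_meminfo HugePages_Surp 0 for node 0.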
00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.127 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.128 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.129 nr_hugepages=1025 00:04:45.129 resv_hugepages=0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:45.129 surplus_hugepages=0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:45.129 anon_hugepages=0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7923912 kB' 'MemAvailable: 9511096 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460656 kB' 'Inactive: 1460680 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 47856 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137140 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74220 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 
11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.129 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
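At this point the odd_alloc case has nr_hugepages=1025 (an odd count, presumably chosen so it does not split evenly across NUMA nodes), surp=0 and resv=0, and it rescans /proc/meminfo for HugePages_Total to confirm the kernel really granted all 1025 pages. The check reduces to the arithmetic below, a sketch that assumes the get_meminfo sketch earlier; variable names follow the trace:

nr_hugepages=1025                      # odd count requested by odd_alloc
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1025 in this run

# The reported total must cover the requested pages plus any surplus and
# reserved pages; with surp=resv=0 both tests collapse to total == 1025.
(( total == nr_hugepages + surp + resv ))
(( total == nr_hugepages ))

With surp and resv both zero the two checks are equivalent, which matches the 1025 == 1025 comparisons in the trace.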
00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:45.130 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7923912 kB' 'MemUsed: 4318056 kB' 'SwapCached: 0 kB' 'Active: 460660 kB' 'Inactive: 1460680 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1803136 kB' 'Mapped: 47856 kB' 'AnonPages: 119788 kB' 'Shmem: 10472 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62920 kB' 'Slab: 137140 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
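The global totals check out, so hugepages.sh moves on to the per-node pass: get_nodes globs /sys/devices/system/node/node+([0-9]) (only node0 on this VM, hence no_nodes=1), records the page count expected on each node, and then re-runs get_meminfo against each node's own meminfo file, which is why the snapshot above carries node0-only fields such as MemUsed and FilePages. Roughly, as a sketch in which nodes_test stands in for the per-node expectations the test builds and the 1025 literal reflects this single-node run:

#!/usr/bin/env bash
# Per-node verification sketch; assumes the get_meminfo() sketch further up.
shopt -s extglob nullglob

declare -A nodes_test
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_test[${node##*node}]=1025   # expected pages on this node (single-node run)
done
(( ${#nodes_test[@]} > 0 )) || exit 1  # mirrors the no_nodes > 0 check in the trace

resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                # fold reserved pages into the expectation
    surp=$(get_meminfo HugePages_Surp "$node")    # read from the node's meminfo
    free=$(get_meminfo HugePages_Free "$node")
    echo "node$node: surp=$surp free=$free expected=${nodes_test[node]}"
done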
00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.131 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:45.132 node0=1025 expecting 1025 00:04:45.132 ************************************ 00:04:45.132 END TEST odd_alloc 00:04:45.132 ************************************ 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:45.132 00:04:45.132 real 0m0.984s 00:04:45.132 user 0m0.424s 00:04:45.132 sys 0m0.574s 00:04:45.132 11:58:33 
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.132 11:58:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:45.391 11:58:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:45.391 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.391 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.391 11:58:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:45.391 ************************************ 00:04:45.391 START TEST custom_alloc 00:04:45.391 ************************************ 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node 
in "${!nodes_hp[@]}" 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:45.391 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.392 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.960 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.960 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:45.960 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.224 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.224 
11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974796 kB' 'MemAvailable: 10561980 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 461272 kB' 'Inactive: 1460680 kB' 'Active(anon): 129288 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120288 kB' 'Mapped: 47988 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137096 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74176 kB' 'KernelStack: 6292 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.224 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.225 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974796 kB' 'MemAvailable: 10561980 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460864 kB' 'Inactive: 1460680 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120136 kB' 'Mapped: 47816 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137100 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74180 kB' 'KernelStack: 6268 kB' 'PageTables: 4056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.226 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.227 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
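As an aside on the numbers this custom_alloc trace is verifying: the pool requested earlier (get_test_nr_hugepages 1048576) appears to be expressed in kB, so with the 2048 kB Hugepagesize reported in the meminfo dumps it resolves to 512 pages, all pinned to node 0 via HUGENODE. A small illustrative calculation (variable names here are descriptive, not the script's own):

    size_kb=1048576                                   # requested hugetlb pool: 1 GiB in kB
    hugepagesize_kb=2048                              # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512
    HUGENODE="nodes_hp[0]=${nr_hugepages}"            # place the whole pool on node 0
    echo "$HUGENODE"                                  # prints: nodes_hp[0]=512

The surrounding get_meminfo calls (AnonHugePages, HugePages_Surp, HugePages_Rsvd) are verify_nr_hugepages confirming that none of those 512 pages are anonymous THP, surplus, or reserved before the per-node totals are compared.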
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.228 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8974796 kB' 'MemAvailable: 10561980 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460820 kB' 'Inactive: 1460680 kB' 'Active(anon): 128836 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 120096 kB' 'Mapped: 47924 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137108 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74188 kB' 'KernelStack: 6316 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB'
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.228 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.229 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:46.230 nr_hugepages=512 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.230 resv_hugepages=0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.230 surplus_hugepages=0 00:04:46.230 anon_hugepages=0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
val 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8976484 kB' 'MemAvailable: 10563668 kB' 'Buffers: 2436 kB' 'Cached: 1800700 kB' 'SwapCached: 0 kB' 'Active: 460684 kB' 'Inactive: 1460680 kB' 'Active(anon): 128700 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 47924 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137108 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74188 kB' 'KernelStack: 6284 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.230 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.231 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8976484 kB' 'MemUsed: 3265484 kB' 'SwapCached: 0 kB' 'Active: 460576 kB' 'Inactive: 1460680 kB' 'Active(anon): 128592 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460680 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 
kB' 'Writeback: 0 kB' 'FilePages: 1803136 kB' 'Mapped: 47924 kB' 'AnonPages: 120104 kB' 'Shmem: 10472 kB' 'KernelStack: 6284 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62920 kB' 'Slab: 137104 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.232 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.233 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:46.234 node0=512 expecting 512 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.234 ************************************ 00:04:46.234 END TEST custom_alloc 00:04:46.234 ************************************ 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.234 00:04:46.234 real 0m1.004s 00:04:46.234 user 0m0.419s 00:04:46.234 sys 0m0.619s 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.234 11:58:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.493 11:58:34 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:46.493 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.493 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.493 11:58:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.493 ************************************ 00:04:46.493 START TEST no_shrink_alloc 00:04:46.493 ************************************ 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:46.493 11:58:34 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.493 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.068 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.068 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.068 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.068 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.068 11:58:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7929684 kB' 'MemAvailable: 9516872 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460948 kB' 'Inactive: 1460684 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 120028 kB' 'Mapped: 47984 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 136980 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74060 kB' 'KernelStack: 6168 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.068 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
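The repeated 'IFS=': '' / 'read -r var val _' / 'continue' records in this stretch of the trace are the setup/common.sh get_meminfo helper scanning /proc/meminfo one key at a time until it reaches the requested counter (here AnonHugePages, later HugePages_Surp and HugePages_Rsvd). The following is a minimal sketch of that loop, reconstructed from the commands visible in the xtrace; variable and file names (get, node, mem_f, mem, var, val) are taken from the trace, but the exact control flow of the real helper in the SPDK repository may differ.

    #!/usr/bin/env bash
    # Hedged reconstruction of get_meminfo as exercised in the trace above.
    shopt -s extglob   # needed for the +([0-9]) pattern used when stripping "Node N " prefixes

    get_meminfo() {
        local get=$1     # meminfo key to look up, e.g. AnonHugePages or HugePages_Surp
        local node=$2    # optional NUMA node number; empty means system-wide /proc/meminfo
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node counters live under /sys when a specific node is requested.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Drop the "Node N " prefix carried by per-node meminfo lines.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan key by key; non-matching keys hit "continue", the match echoes its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as, say, get_meminfo HugePages_Surp (or with a node number to read the per-node /sys path), it prints only the value column; that printed value is what feeds the anon=0 and surp=0 assignments that appear later in this trace once the loop reaches the matching key.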
00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.069 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:34 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7929432 kB' 'MemAvailable: 9516620 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460492 kB' 'Inactive: 1460684 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137012 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74092 kB' 'KernelStack: 6192 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.070 
11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.070 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.071 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.071 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7929432 kB' 'MemAvailable: 9516620 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460740 kB' 'Inactive: 1460684 kB' 'Active(anon): 128756 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119900 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137012 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74092 kB' 'KernelStack: 6192 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.072 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.359 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 
11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.360 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.361 nr_hugepages=1024 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.361 resv_hugepages=0 00:04:47.361 surplus_hugepages=0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.361 anon_hugepages=0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7929432 kB' 'MemAvailable: 9516620 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460732 kB' 'Inactive: 1460684 kB' 'Active(anon): 128748 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 137012 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74092 kB' 'KernelStack: 6192 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 
kB' 'Committed_AS: 347424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54916 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.361 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 
11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 
11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.362 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
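(The field-by-field scan traced above is the setup/common.sh get_meminfo helper walking /proc/meminfo until it reaches the key it was asked for, HugePages_Total in this call. A minimal sketch of that behaviour, reconstructed from the trace rather than taken from the SPDK source, so the fallback path and the "Node N" prefix handling below are assumptions:)

```bash
# Hedged sketch (not the SPDK source): roughly what the traced
# setup/common.sh get_meminfo appears to do in this log -- scan
# /proc/meminfo (or a node's own meminfo) field by field and print the
# value of the requested key.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups use the node's meminfo when it exists; with an
    # empty $node this test sees ".../node/node/meminfo" (as in the
    # trace) and falls back to the system-wide /proc/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node lines look like "Node 0 HugePages_Total: 1024"; strip the
    # "Node N " prefix so both file formats parse the same way.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# On the box in this log:
#   get_meminfo HugePages_Total   -> 1024
#   get_meminfo HugePages_Surp 0  -> 0
```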
00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7929432 kB' 'MemUsed: 4312536 kB' 'SwapCached: 0 kB' 'Active: 460720 kB' 'Inactive: 1460684 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1803140 kB' 'Mapped: 47860 kB' 'AnonPages: 119828 kB' 'Shmem: 10472 kB' 'KernelStack: 6176 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62920 kB' 'Slab: 137008 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 74088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.363 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
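(The lookup in progress here is the node-0 HugePages_Surp read that feeds the per-node half of verify_nr_hugepages: get_nodes records what each NUMA node holds, then the hugepages.sh@115-@128 loop adds the reserved and per-node surplus counts and prints the "node0=... expecting ..." summary seen further down. A rough sketch of that bookkeeping, assuming the get_meminfo sketch shown earlier and the values this run produced; the real hugepages.sh may differ in detail:)

```bash
# Hedged sketch of the per-node bookkeeping this trace walks through.
# Variable names mirror the trace; values are the ones from this run.
nr_hugepages=1024 resv=0 surp=0
declare -A nodes_sys nodes_test sorted_t sorted_s
no_nodes=0
shopt -s nullglob

# get_nodes: record how many hugepages each NUMA node currently holds.
for node in /sys/devices/system/node/node[0-9]*; do
    node=${node##*node}
    nodes_sys[$node]=$(get_meminfo HugePages_Total "$node")
    nodes_test[$node]=$nr_hugepages   # what the test expects per node
    (( ++no_nodes ))
done
(( no_nodes > 0 )) || echo "no NUMA nodes found" >&2

# verify: each node should hold the expected count plus any reserved or
# per-node surplus pages (all zero in this run).
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    sorted_t[${nodes_test[$node]}]=1
    sorted_s[${nodes_sys[$node]}]=1
    echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
done
# prints: node0=1024 expecting 1024
```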
00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 
11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.364 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.365 node0=1024 expecting 1024 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.365 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:47.885 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.885 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.885 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.885 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:47.885 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.885 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7930688 kB' 'MemAvailable: 9517876 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 461340 kB' 'Inactive: 1460684 kB' 'Active(anon): 129356 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 120320 kB' 'Mapped: 48472 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 136852 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 73932 kB' 'KernelStack: 6252 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.886 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7931348 kB' 'MemAvailable: 9518536 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460696 kB' 'Inactive: 1460684 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 136880 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 73960 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.887 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.888 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
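The loop traced here is setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time with IFS=': ' and read -r, skipping every key that does not match the requested field (HugePages_Surp in this pass) and echoing the value of the one that does. A minimal, self-contained sketch of that same pattern — simplified from what the trace shows, not the real setup/common.sh — could look like this on Linux:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace above: split each
# /proc/meminfo line on ': ', skip non-matching keys (each skip is one of the
# "continue" entries logged above), echo the value of the requested field.
get_meminfo() {
    local get=$1      # field to look up, e.g. HugePages_Surp
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: prints 0 on this test VM, matching the "echo 0" in the trace.
get_meminfo HugePages_Surp
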
00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7931348 kB' 'MemAvailable: 9518536 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460868 kB' 'Inactive: 1460684 kB' 'Active(anon): 128884 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 136876 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 73956 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.889 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
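The values gathered by these scans feed the per-node comparison printed earlier in the trace ("node0=1024 expecting 1024", after the INFO line noting that 512 hugepages were requested but 1024 were already allocated). A hedged sketch of that kind of check — a hypothetical standalone helper, not the project's verify_nr_hugepages — might be:

#!/usr/bin/env bash
# Look up HugePages_Total for one NUMA node, falling back to the system-wide
# /proc/meminfo counter when the per-node file is absent (the trace above falls
# back the same way when no node number is set), then compare it against the
# expected count in the spirit of the "node0=1024 expecting 1024" check.
expected=${1:-1024}
node=${2:-0}
node_meminfo=/sys/devices/system/node/node${node}/meminfo

if [[ -e $node_meminfo ]]; then
    # per-node lines look like "Node 0 HugePages_Total:  1024"
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_meminfo")
else
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
fi

echo "node${node}=${total} expecting ${expected}"
[[ $total -eq $expected ]] || { echo "unexpected hugepage count" >&2; exit 1; }
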
00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.890 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.893 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.894 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 
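The scan condensed above can be summarized with a short sketch. This is not the SPDK helper itself, only a minimal reading of what setup/common.sh's get_meminfo appears to do in this trace: pick /proc/meminfo or a per-node sysfs file, split each line on ': ', skip keys that do not match, and echo the matching value. The function name, the sed-based "Node N" prefix strip, and the usage lines are illustrative assumptions.

  get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live under sysfs when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$key" ]] || continue   # the role of every 'continue' record above
      echo "$val"
      return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node lines carry a "Node N " prefix
    return 1
  }
  # Matching the trace: get_meminfo_sketch HugePages_Rsvd    prints 0
  #                     get_meminfo_sketch HugePages_Surp 0  prints 0 for node0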
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.157 nr_hugepages=1024 00:04:48.157 resv_hugepages=0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.157 surplus_hugepages=0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.157 anon_hugepages=0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.157 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7931348 kB' 'MemAvailable: 9518536 kB' 'Buffers: 2436 kB' 'Cached: 1800704 kB' 'SwapCached: 0 kB' 'Active: 460656 kB' 'Inactive: 1460684 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 119808 kB' 'Mapped: 47860 kB' 'Shmem: 10472 kB' 'KReclaimable: 62920 kB' 'Slab: 136872 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 73952 kB' 'KernelStack: 6176 kB' 'PageTables: 3808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 347552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 5064704 kB' 'DirectMap1G: 9437184 kB' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:48.158 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.158 11:58:35
[xtrace condensed: the same per-key scan of /proc/meminfo repeats from Inactive through Unaccepted while looking for HugePages_Total]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.159 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7931348 kB' 'MemUsed: 4310620 kB' 'SwapCached: 0 kB' 'Active: 460864 kB' 'Inactive: 1460684 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 331984 kB' 'Inactive(file): 1460684 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1803140 kB' 'Mapped: 47860 kB' 'AnonPages: 120016 kB' 'Shmem: 10472 kB' 'KernelStack: 6160 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62920 kB' 'Slab: 136868 kB' 'SReclaimable: 62920 kB' 'SUnreclaim: 73948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.160 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same per-key scan repeats, this time over /sys/devices/system/node/node0/meminfo, from MemFree through Unaccepted while looking for HugePages_Surp]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.161 node0=1024 expecting 1024 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.161 00:04:48.161 real 0m1.692s 00:04:48.161 user 0m0.769s 00:04:48.161 sys 0m1.031s 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.161 11:58:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.161 ************************************ 00:04:48.161 END TEST no_shrink_alloc 00:04:48.161 ************************************ 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:48.161 11:58:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:48.161 00:04:48.161 real 0m8.238s 00:04:48.161 user 0m3.405s 00:04:48.161 sys 0m5.007s 00:04:48.161 11:58:35 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.161 11:58:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.161 
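Putting the three lookups together, the bookkeeping this test performs appears to be the following. A hedged sketch only: the variable names are assumptions, and get_meminfo_sketch is the helper sketched earlier, not the SPDK source.

  nr_hugepages=1024 resv=0 surp=0
  # The global pool must account for requested, surplus and reserved pages.
  (( nr_hugepages + surp + resv == 1024 )) || echo 'global hugepage accounting mismatch'
  declare -A nodes_test
  for path in /sys/devices/system/node/node[0-9]*; do
    node=${path##*node}                          # "0" on this single-node VM
    nodes_test[$node]=$nr_hugepages
    (( nodes_test[$node] += resv ))
    (( nodes_test[$node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[$node]} expecting $nr_hugepages"   # 'node0=1024 expecting 1024' above
  done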
************************************ 00:04:48.161 END TEST hugepages 00:04:48.161 ************************************ 00:04:48.161 11:58:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:48.161 11:58:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.161 11:58:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.161 11:58:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.161 ************************************ 00:04:48.161 START TEST driver 00:04:48.161 ************************************ 00:04:48.161 11:58:36 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:48.420 * Looking for test storage... 00:04:48.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.420 11:58:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:48.420 11:58:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.420 11:58:36 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.058 11:58:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:55.058 11:58:42 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.058 11:58:42 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.058 11:58:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:55.058 ************************************ 00:04:55.058 START TEST guess_driver 00:04:55.058 ************************************ 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:55.058 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:55.059 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:55.059 Looking for driver=uio_pci_generic 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:55.059 11:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:55.626 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.885 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:55.885 11:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:55.885 11:58:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:55.885 11:58:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:02.504 00:05:02.504 real 0m7.682s 00:05:02.504 user 0m0.931s 00:05:02.504 sys 0m1.927s 00:05:02.504 11:58:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.504 11:58:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.504 ************************************ 00:05:02.504 END TEST guess_driver 00:05:02.504 ************************************ 00:05:02.504 00:05:02.504 real 0m13.934s 
00:05:02.504 user 0m1.315s 00:05:02.504 sys 0m2.932s 00:05:02.504 11:58:49 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.504 11:58:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.504 ************************************ 00:05:02.504 END TEST driver 00:05:02.504 ************************************ 00:05:02.504 11:58:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:02.504 11:58:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.504 11:58:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.504 11:58:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.504 ************************************ 00:05:02.504 START TEST devices 00:05:02.504 ************************************ 00:05:02.504 11:58:50 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:02.504 * Looking for test storage... 00:05:02.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:02.504 11:58:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:02.504 11:58:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:02.504 11:58:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.504 11:58:50 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.439 11:58:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 
setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:03.698 11:58:51 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:03.698 No valid GPT data, bailing 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes 
nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:03.698 No valid GPT data, bailing 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:03.698 No valid GPT data, bailing 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:03.698 
11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:03.698 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:03.698 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:03.957 No valid GPT data, bailing 00:05:03.957 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:03.957 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.957 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:03.957 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:03.957 11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:03.957 11:58:51 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:03.957 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:03.957 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:03.957 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:03.958 No valid GPT data, bailing 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:03.958 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:03.958 11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:03.958 11:58:51 
setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:03.958 No valid GPT data, bailing 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.958 11:58:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:03.958 11:58:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:03.958 11:58:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:03.958 11:58:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:03.958 11:58:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:03.958 11:58:51 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.958 11:58:51 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.958 11:58:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.958 ************************************ 00:05:03.958 START TEST nvme_mount 00:05:03.958 ************************************ 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 
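For context, the size gate traced above reduces to one comparison: each candidate namespace is sized and kept only if it is at least min_disk_size=3221225472 bytes (3 GiB); nvme0n1 reports 5368709120 bytes and passes, while the 1 GiB nvme3n1 is skipped. A rough stand-alone sketch of that check, assuming /sys/block/<dev>/size reports 512-byte sectors and using a hypothetical dev_size_bytes helper rather than the script's own sec_size_to_bytes:

#!/usr/bin/env bash
# Sketch only -- approximates the size gate seen in the trace, not the setup script itself.
min_disk_size=3221225472   # 3 GiB threshold, as in setup/devices.sh@198

dev_size_bytes() {
    # /sys/block/<dev>/size is the device length in 512-byte sectors.
    local dev=$1
    echo $(( $(cat "/sys/block/$dev/size") * 512 ))
}

for dev in nvme0n1 nvme1n1 nvme3n1; do
    [[ -e /sys/block/$dev ]] || continue
    size=$(dev_size_bytes "$dev")
    if (( size >= min_disk_size )); then
        echo "$dev: $size bytes, eligible for the mount tests"
    else
        echo "$dev: $size bytes, too small, skipped"
    fi
done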
00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.958 11:58:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:05.334 Creating new GPT entries in memory. 00:05:05.334 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:05.334 other utilities. 00:05:05.334 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:05.334 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.334 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:05.334 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:05.334 11:58:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:06.271 Creating new GPT entries in memory. 00:05:06.271 The operation has completed successfully. 
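The nvme_mount flow that begins here is visible in compressed form over the next few entries: zap the GPT, create one 262144-sector partition, format it ext4, mount it, then place a dummy file that the later verify step checks for. A condensed sketch using the same device, flags, and mount point that appear in the trace (the dummy-file step is implied by the later [[ -e .../test_nvme ]] checks rather than shown explicitly; destructive, so only meaningful on a scratch disk):

# Sketch of the traced sequence, not the setup scripts themselves.
disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # wipe existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # one 262144-sector (~128 MiB) partition
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                         # quiet + force, as in setup/common.sh@71
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                              # dummy file the verify step looks for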
00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59384 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:06.271 11:58:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.271 11:58:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.530 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.789 11:58:54 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.789 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.789 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.789 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.789 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:06.789 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.048 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:07.048 11:58:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.307 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.566 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.566 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.566 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.566 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.566 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.825 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.825 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:07.825 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:07.825 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.825 11:58:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.095 11:58:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.378 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.637 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:08.637 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.897 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.156 11:58:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:09.415 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:09.674 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.242 11:58:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:10.242 11:58:57 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:10.242 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:10.242 00:05:10.242 real 0m6.301s 00:05:10.242 user 0m1.697s 00:05:10.242 sys 0m2.317s 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.242 11:58:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:10.242 ************************************ 00:05:10.242 END TEST nvme_mount 00:05:10.242 ************************************ 00:05:10.501 11:58:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:10.501 11:58:58 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.501 11:58:58 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.501 11:58:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:10.501 ************************************ 00:05:10.501 START TEST dm_mount 00:05:10.501 ************************************ 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:10.501 11:58:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.437 Creating new GPT entries in memory. 00:05:11.437 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:11.437 other utilities. 00:05:11.437 11:58:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:11.437 11:58:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.437 11:58:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.437 11:58:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.437 11:58:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:12.373 Creating new GPT entries in memory. 00:05:12.373 The operation has completed successfully. 00:05:12.373 11:59:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:12.373 11:59:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.373 11:59:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.373 11:59:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.373 11:59:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:13.750 The operation has completed successfully. 
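At this point two 262144-sector partitions exist on nvme0n1 (sectors 2048-264191 and 264192-526335); the entries that follow create a device-mapper node named nvme_dm_test over them, resolve it to dm-0, and confirm that both partitions list dm-0 as a holder. The exact dmsetup table the test script feeds is not shown in this excerpt; purely as an illustration, a linear concatenation of the two partitions could be built like this:

# Illustration only -- the real table used by setup/devices.sh is not visible in the trace.
dmsetup create nvme_dm_test <<'EOF'
0      262144 linear /dev/nvme0n1p1 0
262144 262144 linear /dev/nvme0n1p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test      # resolves to /dev/dm-0 in the log above
ls /sys/class/block/nvme0n1p1/holders     # expected to contain dm-0
ls /sys/class/block/nvme0n1p2/holders     # expected to contain dm-0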
00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60023 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.750 11:59:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.008 11:59:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.305 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.563 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:14.563 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.821 11:59:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.079 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.337 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.337 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.595 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.595 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.595 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.595 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.852 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:15.852 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.113 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.113 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:16.113 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:16.113 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:16.113 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:16.114 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:16.114 00:05:16.114 real 0m5.697s 00:05:16.114 user 0m1.160s 00:05:16.114 sys 0m1.457s 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.114 ************************************ 00:05:16.114 END TEST dm_mount 00:05:16.114 ************************************ 00:05:16.114 11:59:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.114 11:59:04 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:16.379 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.379 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:16.379 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:16.379 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:16.379 11:59:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:16.379 00:05:16.379 real 0m14.260s 00:05:16.379 user 0m3.794s 00:05:16.379 sys 0m4.805s 00:05:16.379 11:59:04 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.379 11:59:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:16.379 ************************************ 00:05:16.379 END TEST devices 00:05:16.379 ************************************ 00:05:16.379 00:05:16.379 real 0m50.891s 00:05:16.379 user 0m12.233s 00:05:16.379 sys 0m18.654s 00:05:16.379 11:59:04 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.379 ************************************ 00:05:16.379 11:59:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:16.379 END TEST setup.sh 00:05:16.379 ************************************ 00:05:16.636 11:59:04 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.769 Hugepages 00:05:17.769 node hugesize free / total 00:05:17.769 node0 1048576kB 0 / 0 00:05:17.769 node0 2048kB 2048 / 2048 00:05:17.769 
00:05:17.769 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.769 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:18.028 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:18.028 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:18.028 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:18.286 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:18.286 11:59:06 -- spdk/autotest.sh@130 -- # uname -s 00:05:18.286 11:59:06 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:18.286 11:59:06 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:18.286 11:59:06 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.787 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.787 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.787 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.787 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.787 11:59:07 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:20.722 11:59:08 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:20.722 11:59:08 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:20.722 11:59:08 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.722 11:59:08 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:20.722 11:59:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:20.722 11:59:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:20.722 11:59:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:20.722 11:59:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:20.722 11:59:08 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.982 11:59:08 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:20.982 11:59:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:20.982 11:59:08 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.500 Waiting for block devices as requested 00:05:21.500 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.759 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.759 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.759 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:27.081 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:27.081 11:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.081 11:59:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1557 -- # continue 00:05:27.081 11:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.081 11:59:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1557 -- # continue 00:05:27.081 11:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.081 11:59:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1557 -- # continue 00:05:27.081 11:59:14 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:05:27.081 11:59:14 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:27.081 11:59:14 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:27.081 11:59:14 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:27.081 11:59:14 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:27.081 11:59:14 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:27.081 11:59:14 -- common/autotest_common.sh@1557 -- # continue 00:05:27.081 11:59:14 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:05:27.081 11:59:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.081 11:59:14 -- common/autotest_common.sh@10 -- # set +x 00:05:27.352 11:59:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:27.352 11:59:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.352 11:59:15 -- common/autotest_common.sh@10 -- # set +x 00:05:27.352 11:59:15 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.853 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.853 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.853 11:59:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:28.853 11:59:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.853 11:59:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.853 11:59:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:28.853 11:59:16 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:28.853 11:59:16 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:28.853 11:59:16 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:28.853 11:59:16 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:28.853 11:59:16 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:28.853 11:59:16 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:28.853 11:59:16 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:28.853 11:59:16 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.853 11:59:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:28.853 11:59:16 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:29.112 11:59:16 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:29.112 11:59:16 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:29.112 11:59:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:29.112 11:59:16 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.112 11:59:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:29.112 11:59:16 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.112 11:59:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:29.112 11:59:16 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.112 11:59:16 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:29.112 11:59:16 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:29.112 
11:59:16 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:29.112 11:59:16 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:29.112 11:59:16 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:29.112 11:59:16 -- common/autotest_common.sh@1593 -- # return 0 00:05:29.112 11:59:16 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:29.112 11:59:16 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:29.112 11:59:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.112 11:59:16 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:29.112 11:59:16 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:29.112 11:59:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.112 11:59:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.112 11:59:16 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:29.112 11:59:16 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.112 11:59:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.112 11:59:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.112 11:59:16 -- common/autotest_common.sh@10 -- # set +x 00:05:29.112 ************************************ 00:05:29.112 START TEST env 00:05:29.112 ************************************ 00:05:29.112 11:59:16 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:29.112 * Looking for test storage... 00:05:29.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:29.371 11:59:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.371 11:59:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.371 11:59:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.371 11:59:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.371 ************************************ 00:05:29.371 START TEST env_memory 00:05:29.371 ************************************ 00:05:29.371 11:59:17 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:29.371 00:05:29.371 00:05:29.371 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.371 http://cunit.sourceforge.net/ 00:05:29.371 00:05:29.371 00:05:29.371 Suite: memory 00:05:29.371 Test: alloc and free memory map ...[2024-07-26 11:59:17.175922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:29.371 passed 00:05:29.371 Test: mem map translation ...[2024-07-26 11:59:17.216984] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:29.371 [2024-07-26 11:59:17.217052] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:29.371 [2024-07-26 11:59:17.217129] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:29.371 [2024-07-26 11:59:17.217150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:29.371 passed 00:05:29.371 Test: mem map registration ...[2024-07-26 11:59:17.281334] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:05:29.371 [2024-07-26 11:59:17.281402] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:29.371 passed 00:05:29.630 Test: mem map adjacent registrations ...passed 00:05:29.630 00:05:29.631 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.631 suites 1 1 n/a 0 0 00:05:29.631 tests 4 4 4 0 0 00:05:29.631 asserts 152 152 152 0 n/a 00:05:29.631 00:05:29.631 Elapsed time = 0.228 seconds 00:05:29.631 00:05:29.631 real 0m0.277s 00:05:29.631 user 0m0.247s 00:05:29.631 sys 0m0.021s 00:05:29.631 11:59:17 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.631 11:59:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:29.631 ************************************ 00:05:29.631 END TEST env_memory 00:05:29.631 ************************************ 00:05:29.631 11:59:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:29.631 11:59:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.631 11:59:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.631 11:59:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.631 ************************************ 00:05:29.631 START TEST env_vtophys 00:05:29.631 ************************************ 00:05:29.631 11:59:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:29.631 EAL: lib.eal log level changed from notice to debug 00:05:29.631 EAL: Detected lcore 0 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 1 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 2 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 3 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 4 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 5 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 6 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 7 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 8 as core 0 on socket 0 00:05:29.631 EAL: Detected lcore 9 as core 0 on socket 0 00:05:29.631 EAL: Maximum logical cores by configuration: 128 00:05:29.631 EAL: Detected CPU lcores: 10 00:05:29.631 EAL: Detected NUMA nodes: 1 00:05:29.631 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:29.631 EAL: Detected shared linkage of DPDK 00:05:29.631 EAL: No shared files mode enabled, IPC will be disabled 00:05:29.631 EAL: Selected IOVA mode 'PA' 00:05:29.631 EAL: Probing VFIO support... 00:05:29.631 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:29.631 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:29.631 EAL: Ask a virtual area of 0x2e000 bytes 00:05:29.631 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:29.631 EAL: Setting up physically contiguous memory... 
00:05:29.631 EAL: Setting maximum number of open files to 524288 00:05:29.631 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:29.631 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:29.631 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.631 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:29.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.631 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.631 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:29.631 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:29.631 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.631 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:29.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.631 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.631 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:29.631 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:29.631 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.631 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:29.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.631 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.631 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:29.631 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:29.631 EAL: Ask a virtual area of 0x61000 bytes 00:05:29.631 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:29.631 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:29.631 EAL: Ask a virtual area of 0x400000000 bytes 00:05:29.631 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:29.631 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:29.631 EAL: Hugepages will be freed exactly as allocated. 00:05:29.631 EAL: No shared files mode enabled, IPC is disabled 00:05:29.631 EAL: No shared files mode enabled, IPC is disabled 00:05:29.890 EAL: TSC frequency is ~2490000 KHz 00:05:29.890 EAL: Main lcore 0 is ready (tid=7f16fb01fa40;cpuset=[0]) 00:05:29.890 EAL: Trying to obtain current memory policy. 00:05:29.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.890 EAL: Restoring previous memory policy: 0 00:05:29.890 EAL: request: mp_malloc_sync 00:05:29.890 EAL: No shared files mode enabled, IPC is disabled 00:05:29.890 EAL: Heap on socket 0 was expanded by 2MB 00:05:29.890 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:29.890 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:29.890 EAL: Mem event callback 'spdk:(nil)' registered 00:05:29.890 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:29.890 00:05:29.890 00:05:29.890 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.890 http://cunit.sourceforge.net/ 00:05:29.890 00:05:29.890 00:05:29.890 Suite: components_suite 00:05:30.150 Test: vtophys_malloc_test ...passed 00:05:30.150 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:30.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.150 EAL: Restoring previous memory policy: 4 00:05:30.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.150 EAL: request: mp_malloc_sync 00:05:30.150 EAL: No shared files mode enabled, IPC is disabled 00:05:30.150 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.150 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.150 EAL: request: mp_malloc_sync 00:05:30.150 EAL: No shared files mode enabled, IPC is disabled 00:05:30.150 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.409 EAL: Trying to obtain current memory policy. 00:05:30.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.409 EAL: Restoring previous memory policy: 4 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.409 EAL: Trying to obtain current memory policy. 00:05:30.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.409 EAL: Restoring previous memory policy: 4 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.409 EAL: Trying to obtain current memory policy. 00:05:30.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.409 EAL: Restoring previous memory policy: 4 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.409 EAL: Trying to obtain current memory policy. 00:05:30.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.409 EAL: Restoring previous memory policy: 4 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.409 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.409 EAL: request: mp_malloc_sync 00:05:30.409 EAL: No shared files mode enabled, IPC is disabled 00:05:30.409 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.668 EAL: Trying to obtain current memory policy. 
00:05:30.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.668 EAL: Restoring previous memory policy: 4 00:05:30.668 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.668 EAL: request: mp_malloc_sync 00:05:30.668 EAL: No shared files mode enabled, IPC is disabled 00:05:30.668 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.668 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.668 EAL: request: mp_malloc_sync 00:05:30.668 EAL: No shared files mode enabled, IPC is disabled 00:05:30.668 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.927 EAL: Trying to obtain current memory policy. 00:05:30.927 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.927 EAL: Restoring previous memory policy: 4 00:05:30.927 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.927 EAL: request: mp_malloc_sync 00:05:30.927 EAL: No shared files mode enabled, IPC is disabled 00:05:30.927 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.184 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.184 EAL: request: mp_malloc_sync 00:05:31.184 EAL: No shared files mode enabled, IPC is disabled 00:05:31.184 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.442 EAL: Trying to obtain current memory policy. 00:05:31.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.442 EAL: Restoring previous memory policy: 4 00:05:31.442 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.442 EAL: request: mp_malloc_sync 00:05:31.442 EAL: No shared files mode enabled, IPC is disabled 00:05:31.442 EAL: Heap on socket 0 was expanded by 258MB 00:05:32.008 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.008 EAL: request: mp_malloc_sync 00:05:32.008 EAL: No shared files mode enabled, IPC is disabled 00:05:32.008 EAL: Heap on socket 0 was shrunk by 258MB 00:05:32.266 EAL: Trying to obtain current memory policy. 00:05:32.266 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:32.525 EAL: Restoring previous memory policy: 4 00:05:32.525 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.525 EAL: request: mp_malloc_sync 00:05:32.525 EAL: No shared files mode enabled, IPC is disabled 00:05:32.525 EAL: Heap on socket 0 was expanded by 514MB 00:05:33.901 EAL: Calling mem event callback 'spdk:(nil)' 00:05:33.901 EAL: request: mp_malloc_sync 00:05:33.901 EAL: No shared files mode enabled, IPC is disabled 00:05:33.901 EAL: Heap on socket 0 was shrunk by 514MB 00:05:34.468 EAL: Trying to obtain current memory policy. 
00:05:34.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.726 EAL: Restoring previous memory policy: 4 00:05:34.726 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.726 EAL: request: mp_malloc_sync 00:05:34.726 EAL: No shared files mode enabled, IPC is disabled 00:05:34.726 EAL: Heap on socket 0 was expanded by 1026MB 00:05:37.259 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.259 EAL: request: mp_malloc_sync 00:05:37.259 EAL: No shared files mode enabled, IPC is disabled 00:05:37.259 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.162 passed 00:05:39.162 00:05:39.162 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.162 suites 1 1 n/a 0 0 00:05:39.162 tests 2 2 2 0 0 00:05:39.162 asserts 5411 5411 5411 0 n/a 00:05:39.162 00:05:39.162 Elapsed time = 8.945 seconds 00:05:39.162 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.162 EAL: request: mp_malloc_sync 00:05:39.162 EAL: No shared files mode enabled, IPC is disabled 00:05:39.162 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.162 EAL: No shared files mode enabled, IPC is disabled 00:05:39.162 EAL: No shared files mode enabled, IPC is disabled 00:05:39.162 EAL: No shared files mode enabled, IPC is disabled 00:05:39.162 00:05:39.162 real 0m9.262s 00:05:39.162 user 0m8.213s 00:05:39.162 sys 0m0.885s 00:05:39.162 11:59:26 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.162 11:59:26 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 ************************************ 00:05:39.162 END TEST env_vtophys 00:05:39.162 ************************************ 00:05:39.162 11:59:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.162 11:59:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.162 11:59:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.162 11:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 ************************************ 00:05:39.162 START TEST env_pci 00:05:39.162 ************************************ 00:05:39.162 11:59:26 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.162 00:05:39.162 00:05:39.162 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.162 http://cunit.sourceforge.net/ 00:05:39.162 00:05:39.162 00:05:39.162 Suite: pci 00:05:39.162 Test: pci_hook ...[2024-07-26 11:59:26.822087] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61895 has claimed it 00:05:39.162 passed 00:05:39.162 00:05:39.162 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.162 suites 1 1 n/a 0 0 00:05:39.162 tests 1 1 1 0 0 00:05:39.162 asserts 25 25 25 0 n/a 00:05:39.162 00:05:39.162 Elapsed time = 0.009 seconds 00:05:39.162 EAL: Cannot find device (10000:00:01.0) 00:05:39.162 EAL: Failed to attach device on primary process 00:05:39.162 00:05:39.162 real 0m0.115s 00:05:39.162 user 0m0.055s 00:05:39.162 sys 0m0.060s 00:05:39.162 11:59:26 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.162 11:59:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 ************************************ 00:05:39.162 END TEST env_pci 00:05:39.162 ************************************ 00:05:39.162 11:59:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.162 11:59:26 env -- env/env.sh@15 -- # uname 00:05:39.162 11:59:26 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.162 11:59:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.162 11:59:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.162 11:59:26 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:39.162 11:59:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.162 11:59:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.162 ************************************ 00:05:39.162 START TEST env_dpdk_post_init 00:05:39.162 ************************************ 00:05:39.162 11:59:26 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.162 EAL: Detected CPU lcores: 10 00:05:39.162 EAL: Detected NUMA nodes: 1 00:05:39.162 EAL: Detected shared linkage of DPDK 00:05:39.162 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.162 EAL: Selected IOVA mode 'PA' 00:05:39.420 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.421 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:39.421 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:39.421 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:39.421 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:39.421 Starting DPDK initialization... 00:05:39.421 Starting SPDK post initialization... 00:05:39.421 SPDK NVMe probe 00:05:39.421 Attaching to 0000:00:10.0 00:05:39.421 Attaching to 0000:00:11.0 00:05:39.421 Attaching to 0000:00:12.0 00:05:39.421 Attaching to 0000:00:13.0 00:05:39.421 Attached to 0000:00:10.0 00:05:39.421 Attached to 0000:00:11.0 00:05:39.421 Attached to 0000:00:13.0 00:05:39.421 Attached to 0000:00:12.0 00:05:39.421 Cleaning up... 
00:05:39.421 00:05:39.421 real 0m0.297s 00:05:39.421 user 0m0.094s 00:05:39.421 sys 0m0.104s 00:05:39.421 11:59:27 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.421 11:59:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 ************************************ 00:05:39.421 END TEST env_dpdk_post_init 00:05:39.421 ************************************ 00:05:39.421 11:59:27 env -- env/env.sh@26 -- # uname 00:05:39.421 11:59:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.421 11:59:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.421 11:59:27 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.421 11:59:27 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.421 11:59:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.421 ************************************ 00:05:39.421 START TEST env_mem_callbacks 00:05:39.421 ************************************ 00:05:39.421 11:59:27 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.421 EAL: Detected CPU lcores: 10 00:05:39.421 EAL: Detected NUMA nodes: 1 00:05:39.421 EAL: Detected shared linkage of DPDK 00:05:39.679 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.679 EAL: Selected IOVA mode 'PA' 00:05:39.679 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.679 00:05:39.679 00:05:39.679 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.679 http://cunit.sourceforge.net/ 00:05:39.679 00:05:39.679 00:05:39.679 Suite: memory 00:05:39.679 Test: test ... 00:05:39.679 register 0x200000200000 2097152 00:05:39.679 malloc 3145728 00:05:39.679 register 0x200000400000 4194304 00:05:39.679 buf 0x2000004fffc0 len 3145728 PASSED 00:05:39.679 malloc 64 00:05:39.679 buf 0x2000004ffec0 len 64 PASSED 00:05:39.679 malloc 4194304 00:05:39.679 register 0x200000800000 6291456 00:05:39.679 buf 0x2000009fffc0 len 4194304 PASSED 00:05:39.679 free 0x2000004fffc0 3145728 00:05:39.679 free 0x2000004ffec0 64 00:05:39.679 unregister 0x200000400000 4194304 PASSED 00:05:39.679 free 0x2000009fffc0 4194304 00:05:39.679 unregister 0x200000800000 6291456 PASSED 00:05:39.679 malloc 8388608 00:05:39.679 register 0x200000400000 10485760 00:05:39.679 buf 0x2000005fffc0 len 8388608 PASSED 00:05:39.679 free 0x2000005fffc0 8388608 00:05:39.679 unregister 0x200000400000 10485760 PASSED 00:05:39.679 passed 00:05:39.679 00:05:39.679 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.679 suites 1 1 n/a 0 0 00:05:39.679 tests 1 1 1 0 0 00:05:39.679 asserts 15 15 15 0 n/a 00:05:39.679 00:05:39.679 Elapsed time = 0.086 seconds 00:05:39.679 00:05:39.679 real 0m0.294s 00:05:39.679 user 0m0.115s 00:05:39.679 sys 0m0.078s 00:05:39.679 ************************************ 00:05:39.679 END TEST env_mem_callbacks 00:05:39.679 11:59:27 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.679 11:59:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.679 ************************************ 00:05:39.973 00:05:39.973 real 0m10.731s 00:05:39.973 user 0m8.887s 00:05:39.973 sys 0m1.469s 00:05:39.973 11:59:27 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.973 11:59:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.973 ************************************ 00:05:39.973 END TEST env 00:05:39.973 
************************************ 00:05:39.973 11:59:27 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.973 11:59:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.973 11:59:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.973 11:59:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.973 ************************************ 00:05:39.973 START TEST rpc 00:05:39.973 ************************************ 00:05:39.973 11:59:27 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.973 * Looking for test storage... 00:05:39.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.974 11:59:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62014 00:05:39.974 11:59:27 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.974 11:59:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.974 11:59:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62014 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@831 -- # '[' -z 62014 ']' 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.974 11:59:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.232 [2024-07-26 11:59:28.011483] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:05:40.232 [2024-07-26 11:59:28.011612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:05:40.232 [2024-07-26 11:59:28.185714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.492 [2024-07-26 11:59:28.442089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.492 [2024-07-26 11:59:28.442178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62014' to capture a snapshot of events at runtime. 00:05:40.492 [2024-07-26 11:59:28.442196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:40.492 [2024-07-26 11:59:28.442208] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:40.492 [2024-07-26 11:59:28.442222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62014 for offline analysis/debug. 
00:05:40.492 [2024-07-26 11:59:28.442265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.429 11:59:29 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.429 11:59:29 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:41.429 11:59:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.429 11:59:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.429 11:59:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.429 11:59:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.429 11:59:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.429 11:59:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.429 11:59:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 ************************************ 00:05:41.688 START TEST rpc_integrity 00:05:41.688 ************************************ 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.688 { 00:05:41.688 "name": "Malloc0", 00:05:41.688 "aliases": [ 00:05:41.688 "e346f7bb-64ed-4bac-9ddc-6f8ed0d371d1" 00:05:41.688 ], 00:05:41.688 "product_name": "Malloc disk", 00:05:41.688 "block_size": 512, 00:05:41.688 "num_blocks": 16384, 00:05:41.688 "uuid": "e346f7bb-64ed-4bac-9ddc-6f8ed0d371d1", 00:05:41.688 "assigned_rate_limits": { 00:05:41.688 "rw_ios_per_sec": 0, 00:05:41.688 "rw_mbytes_per_sec": 0, 00:05:41.688 "r_mbytes_per_sec": 0, 00:05:41.688 "w_mbytes_per_sec": 0 00:05:41.688 }, 00:05:41.688 "claimed": false, 00:05:41.688 "zoned": false, 00:05:41.688 "supported_io_types": { 00:05:41.688 "read": true, 00:05:41.688 "write": true, 00:05:41.688 "unmap": true, 00:05:41.688 "flush": true, 
00:05:41.688 "reset": true, 00:05:41.688 "nvme_admin": false, 00:05:41.688 "nvme_io": false, 00:05:41.688 "nvme_io_md": false, 00:05:41.688 "write_zeroes": true, 00:05:41.688 "zcopy": true, 00:05:41.688 "get_zone_info": false, 00:05:41.688 "zone_management": false, 00:05:41.688 "zone_append": false, 00:05:41.688 "compare": false, 00:05:41.688 "compare_and_write": false, 00:05:41.688 "abort": true, 00:05:41.688 "seek_hole": false, 00:05:41.688 "seek_data": false, 00:05:41.688 "copy": true, 00:05:41.688 "nvme_iov_md": false 00:05:41.688 }, 00:05:41.688 "memory_domains": [ 00:05:41.688 { 00:05:41.688 "dma_device_id": "system", 00:05:41.688 "dma_device_type": 1 00:05:41.688 }, 00:05:41.688 { 00:05:41.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.688 "dma_device_type": 2 00:05:41.688 } 00:05:41.688 ], 00:05:41.688 "driver_specific": {} 00:05:41.688 } 00:05:41.688 ]' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 [2024-07-26 11:59:29.575542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.688 [2024-07-26 11:59:29.575655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.688 [2024-07-26 11:59:29.575693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:41.688 [2024-07-26 11:59:29.575707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.688 [2024-07-26 11:59:29.578427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.688 [2024-07-26 11:59:29.578489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.688 Passthru0 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.688 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.688 { 00:05:41.688 "name": "Malloc0", 00:05:41.688 "aliases": [ 00:05:41.688 "e346f7bb-64ed-4bac-9ddc-6f8ed0d371d1" 00:05:41.688 ], 00:05:41.688 "product_name": "Malloc disk", 00:05:41.688 "block_size": 512, 00:05:41.688 "num_blocks": 16384, 00:05:41.688 "uuid": "e346f7bb-64ed-4bac-9ddc-6f8ed0d371d1", 00:05:41.688 "assigned_rate_limits": { 00:05:41.688 "rw_ios_per_sec": 0, 00:05:41.688 "rw_mbytes_per_sec": 0, 00:05:41.688 "r_mbytes_per_sec": 0, 00:05:41.688 "w_mbytes_per_sec": 0 00:05:41.688 }, 00:05:41.688 "claimed": true, 00:05:41.688 "claim_type": "exclusive_write", 00:05:41.688 "zoned": false, 00:05:41.688 "supported_io_types": { 00:05:41.688 "read": true, 00:05:41.688 "write": true, 00:05:41.688 "unmap": true, 00:05:41.688 "flush": true, 00:05:41.688 "reset": true, 00:05:41.688 "nvme_admin": false, 00:05:41.688 "nvme_io": false, 00:05:41.688 "nvme_io_md": false, 00:05:41.688 "write_zeroes": true, 00:05:41.688 "zcopy": true, 
00:05:41.688 "get_zone_info": false, 00:05:41.688 "zone_management": false, 00:05:41.688 "zone_append": false, 00:05:41.688 "compare": false, 00:05:41.688 "compare_and_write": false, 00:05:41.688 "abort": true, 00:05:41.688 "seek_hole": false, 00:05:41.688 "seek_data": false, 00:05:41.688 "copy": true, 00:05:41.688 "nvme_iov_md": false 00:05:41.688 }, 00:05:41.688 "memory_domains": [ 00:05:41.688 { 00:05:41.688 "dma_device_id": "system", 00:05:41.688 "dma_device_type": 1 00:05:41.688 }, 00:05:41.688 { 00:05:41.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.688 "dma_device_type": 2 00:05:41.688 } 00:05:41.688 ], 00:05:41.688 "driver_specific": {} 00:05:41.688 }, 00:05:41.688 { 00:05:41.688 "name": "Passthru0", 00:05:41.688 "aliases": [ 00:05:41.688 "5a9fffd0-f822-5eb5-9326-2bbc70653eeb" 00:05:41.688 ], 00:05:41.688 "product_name": "passthru", 00:05:41.688 "block_size": 512, 00:05:41.688 "num_blocks": 16384, 00:05:41.688 "uuid": "5a9fffd0-f822-5eb5-9326-2bbc70653eeb", 00:05:41.688 "assigned_rate_limits": { 00:05:41.688 "rw_ios_per_sec": 0, 00:05:41.688 "rw_mbytes_per_sec": 0, 00:05:41.688 "r_mbytes_per_sec": 0, 00:05:41.688 "w_mbytes_per_sec": 0 00:05:41.688 }, 00:05:41.688 "claimed": false, 00:05:41.688 "zoned": false, 00:05:41.688 "supported_io_types": { 00:05:41.688 "read": true, 00:05:41.688 "write": true, 00:05:41.688 "unmap": true, 00:05:41.688 "flush": true, 00:05:41.688 "reset": true, 00:05:41.688 "nvme_admin": false, 00:05:41.688 "nvme_io": false, 00:05:41.688 "nvme_io_md": false, 00:05:41.688 "write_zeroes": true, 00:05:41.688 "zcopy": true, 00:05:41.688 "get_zone_info": false, 00:05:41.688 "zone_management": false, 00:05:41.688 "zone_append": false, 00:05:41.688 "compare": false, 00:05:41.688 "compare_and_write": false, 00:05:41.688 "abort": true, 00:05:41.688 "seek_hole": false, 00:05:41.688 "seek_data": false, 00:05:41.688 "copy": true, 00:05:41.688 "nvme_iov_md": false 00:05:41.688 }, 00:05:41.688 "memory_domains": [ 00:05:41.688 { 00:05:41.688 "dma_device_id": "system", 00:05:41.688 "dma_device_type": 1 00:05:41.688 }, 00:05:41.688 { 00:05:41.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.688 "dma_device_type": 2 00:05:41.688 } 00:05:41.688 ], 00:05:41.688 "driver_specific": { 00:05:41.688 "passthru": { 00:05:41.688 "name": "Passthru0", 00:05:41.688 "base_bdev_name": "Malloc0" 00:05:41.688 } 00:05:41.688 } 00:05:41.688 } 00:05:41.688 ]' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.688 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.689 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.689 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.947 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.947 11:59:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.947 00:05:41.947 real 0m0.372s 00:05:41.947 user 0m0.193s 00:05:41.947 sys 0m0.062s 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.947 11:59:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 ************************************ 00:05:41.947 END TEST rpc_integrity 00:05:41.947 ************************************ 00:05:41.947 11:59:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.947 11:59:29 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.947 11:59:29 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.947 11:59:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 ************************************ 00:05:41.947 START TEST rpc_plugins 00:05:41.947 ************************************ 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:41.947 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.947 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.947 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.947 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.947 { 00:05:41.947 "name": "Malloc1", 00:05:41.947 "aliases": [ 00:05:41.947 "eda7f163-ed13-4183-ba45-2ce063166ff2" 00:05:41.947 ], 00:05:41.947 "product_name": "Malloc disk", 00:05:41.947 "block_size": 4096, 00:05:41.947 "num_blocks": 256, 00:05:41.947 "uuid": "eda7f163-ed13-4183-ba45-2ce063166ff2", 00:05:41.947 "assigned_rate_limits": { 00:05:41.947 "rw_ios_per_sec": 0, 00:05:41.947 "rw_mbytes_per_sec": 0, 00:05:41.947 "r_mbytes_per_sec": 0, 00:05:41.947 "w_mbytes_per_sec": 0 00:05:41.947 }, 00:05:41.947 "claimed": false, 00:05:41.947 "zoned": false, 00:05:41.947 "supported_io_types": { 00:05:41.947 "read": true, 00:05:41.947 "write": true, 00:05:41.947 "unmap": true, 00:05:41.947 "flush": true, 00:05:41.947 "reset": true, 00:05:41.947 "nvme_admin": false, 00:05:41.947 "nvme_io": false, 00:05:41.947 "nvme_io_md": false, 00:05:41.947 "write_zeroes": true, 00:05:41.947 "zcopy": true, 00:05:41.947 "get_zone_info": false, 00:05:41.947 "zone_management": false, 00:05:41.947 "zone_append": false, 00:05:41.947 "compare": false, 00:05:41.947 "compare_and_write": false, 00:05:41.947 "abort": true, 00:05:41.947 "seek_hole": false, 00:05:41.947 "seek_data": false, 00:05:41.947 "copy": true, 00:05:41.947 "nvme_iov_md": false 00:05:41.947 }, 00:05:41.947 "memory_domains": [ 00:05:41.947 { 00:05:41.947 "dma_device_id": "system", 00:05:41.947 "dma_device_type": 1 00:05:41.947 }, 00:05:41.947 { 00:05:41.947 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:41.947 "dma_device_type": 2 00:05:41.947 } 00:05:41.947 ], 00:05:41.947 "driver_specific": {} 00:05:41.947 } 00:05:41.947 ]' 00:05:41.948 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:42.206 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.206 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.206 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.206 11:59:29 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.206 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.206 11:59:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:42.206 11:59:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.206 00:05:42.206 real 0m0.171s 00:05:42.206 user 0m0.093s 00:05:42.206 sys 0m0.030s 00:05:42.206 11:59:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.206 11:59:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:42.206 ************************************ 00:05:42.206 END TEST rpc_plugins 00:05:42.206 ************************************ 00:05:42.206 11:59:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:42.206 11:59:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.206 11:59:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.206 11:59:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.206 ************************************ 00:05:42.207 START TEST rpc_trace_cmd_test 00:05:42.207 ************************************ 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:42.207 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62014", 00:05:42.207 "tpoint_group_mask": "0x8", 00:05:42.207 "iscsi_conn": { 00:05:42.207 "mask": "0x2", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "scsi": { 00:05:42.207 "mask": "0x4", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "bdev": { 00:05:42.207 "mask": "0x8", 00:05:42.207 "tpoint_mask": "0xffffffffffffffff" 00:05:42.207 }, 00:05:42.207 "nvmf_rdma": { 00:05:42.207 "mask": "0x10", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "nvmf_tcp": { 00:05:42.207 "mask": "0x20", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "ftl": { 00:05:42.207 "mask": "0x40", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "blobfs": { 00:05:42.207 "mask": "0x80", 00:05:42.207 
"tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "dsa": { 00:05:42.207 "mask": "0x200", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "thread": { 00:05:42.207 "mask": "0x400", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "nvme_pcie": { 00:05:42.207 "mask": "0x800", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "iaa": { 00:05:42.207 "mask": "0x1000", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "nvme_tcp": { 00:05:42.207 "mask": "0x2000", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "bdev_nvme": { 00:05:42.207 "mask": "0x4000", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 }, 00:05:42.207 "sock": { 00:05:42.207 "mask": "0x8000", 00:05:42.207 "tpoint_mask": "0x0" 00:05:42.207 } 00:05:42.207 }' 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:42.207 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.466 ************************************ 00:05:42.466 END TEST rpc_trace_cmd_test 00:05:42.466 ************************************ 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.466 00:05:42.466 real 0m0.240s 00:05:42.466 user 0m0.189s 00:05:42.466 sys 0m0.041s 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.466 11:59:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:42.466 11:59:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.466 11:59:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.466 11:59:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.466 11:59:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.466 11:59:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.466 11:59:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.466 ************************************ 00:05:42.466 START TEST rpc_daemon_integrity 00:05:42.466 ************************************ 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.466 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.725 11:59:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.725 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.725 { 00:05:42.725 "name": "Malloc2", 00:05:42.725 "aliases": [ 00:05:42.725 "30d30264-cffe-4d66-9ffa-ee67f9f61dbf" 00:05:42.725 ], 00:05:42.725 "product_name": "Malloc disk", 00:05:42.725 "block_size": 512, 00:05:42.725 "num_blocks": 16384, 00:05:42.725 "uuid": "30d30264-cffe-4d66-9ffa-ee67f9f61dbf", 00:05:42.725 "assigned_rate_limits": { 00:05:42.725 "rw_ios_per_sec": 0, 00:05:42.725 "rw_mbytes_per_sec": 0, 00:05:42.725 "r_mbytes_per_sec": 0, 00:05:42.725 "w_mbytes_per_sec": 0 00:05:42.725 }, 00:05:42.725 "claimed": false, 00:05:42.725 "zoned": false, 00:05:42.725 "supported_io_types": { 00:05:42.725 "read": true, 00:05:42.725 "write": true, 00:05:42.725 "unmap": true, 00:05:42.725 "flush": true, 00:05:42.725 "reset": true, 00:05:42.725 "nvme_admin": false, 00:05:42.725 "nvme_io": false, 00:05:42.725 "nvme_io_md": false, 00:05:42.725 "write_zeroes": true, 00:05:42.725 "zcopy": true, 00:05:42.725 "get_zone_info": false, 00:05:42.725 "zone_management": false, 00:05:42.725 "zone_append": false, 00:05:42.725 "compare": false, 00:05:42.725 "compare_and_write": false, 00:05:42.725 "abort": true, 00:05:42.725 "seek_hole": false, 00:05:42.725 "seek_data": false, 00:05:42.725 "copy": true, 00:05:42.725 "nvme_iov_md": false 00:05:42.725 }, 00:05:42.725 "memory_domains": [ 00:05:42.725 { 00:05:42.725 "dma_device_id": "system", 00:05:42.725 "dma_device_type": 1 00:05:42.725 }, 00:05:42.725 { 00:05:42.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.726 "dma_device_type": 2 00:05:42.726 } 00:05:42.726 ], 00:05:42.726 "driver_specific": {} 00:05:42.726 } 00:05:42.726 ]' 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.726 [2024-07-26 11:59:30.551599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:42.726 [2024-07-26 11:59:30.551677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.726 [2024-07-26 11:59:30.551704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:42.726 [2024-07-26 11:59:30.551716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.726 [2024-07-26 11:59:30.554088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.726 [2024-07-26 11:59:30.554150] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.726 Passthru0 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.726 { 00:05:42.726 "name": "Malloc2", 00:05:42.726 "aliases": [ 00:05:42.726 "30d30264-cffe-4d66-9ffa-ee67f9f61dbf" 00:05:42.726 ], 00:05:42.726 "product_name": "Malloc disk", 00:05:42.726 "block_size": 512, 00:05:42.726 "num_blocks": 16384, 00:05:42.726 "uuid": "30d30264-cffe-4d66-9ffa-ee67f9f61dbf", 00:05:42.726 "assigned_rate_limits": { 00:05:42.726 "rw_ios_per_sec": 0, 00:05:42.726 "rw_mbytes_per_sec": 0, 00:05:42.726 "r_mbytes_per_sec": 0, 00:05:42.726 "w_mbytes_per_sec": 0 00:05:42.726 }, 00:05:42.726 "claimed": true, 00:05:42.726 "claim_type": "exclusive_write", 00:05:42.726 "zoned": false, 00:05:42.726 "supported_io_types": { 00:05:42.726 "read": true, 00:05:42.726 "write": true, 00:05:42.726 "unmap": true, 00:05:42.726 "flush": true, 00:05:42.726 "reset": true, 00:05:42.726 "nvme_admin": false, 00:05:42.726 "nvme_io": false, 00:05:42.726 "nvme_io_md": false, 00:05:42.726 "write_zeroes": true, 00:05:42.726 "zcopy": true, 00:05:42.726 "get_zone_info": false, 00:05:42.726 "zone_management": false, 00:05:42.726 "zone_append": false, 00:05:42.726 "compare": false, 00:05:42.726 "compare_and_write": false, 00:05:42.726 "abort": true, 00:05:42.726 "seek_hole": false, 00:05:42.726 "seek_data": false, 00:05:42.726 "copy": true, 00:05:42.726 "nvme_iov_md": false 00:05:42.726 }, 00:05:42.726 "memory_domains": [ 00:05:42.726 { 00:05:42.726 "dma_device_id": "system", 00:05:42.726 "dma_device_type": 1 00:05:42.726 }, 00:05:42.726 { 00:05:42.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.726 "dma_device_type": 2 00:05:42.726 } 00:05:42.726 ], 00:05:42.726 "driver_specific": {} 00:05:42.726 }, 00:05:42.726 { 00:05:42.726 "name": "Passthru0", 00:05:42.726 "aliases": [ 00:05:42.726 "7b7b9fea-1c18-5056-8914-fdc2490df1d8" 00:05:42.726 ], 00:05:42.726 "product_name": "passthru", 00:05:42.726 "block_size": 512, 00:05:42.726 "num_blocks": 16384, 00:05:42.726 "uuid": "7b7b9fea-1c18-5056-8914-fdc2490df1d8", 00:05:42.726 "assigned_rate_limits": { 00:05:42.726 "rw_ios_per_sec": 0, 00:05:42.726 "rw_mbytes_per_sec": 0, 00:05:42.726 "r_mbytes_per_sec": 0, 00:05:42.726 "w_mbytes_per_sec": 0 00:05:42.726 }, 00:05:42.726 "claimed": false, 00:05:42.726 "zoned": false, 00:05:42.726 "supported_io_types": { 00:05:42.726 "read": true, 00:05:42.726 "write": true, 00:05:42.726 "unmap": true, 00:05:42.726 "flush": true, 00:05:42.726 "reset": true, 00:05:42.726 "nvme_admin": false, 00:05:42.726 "nvme_io": false, 00:05:42.726 "nvme_io_md": false, 00:05:42.726 "write_zeroes": true, 00:05:42.726 "zcopy": true, 00:05:42.726 "get_zone_info": false, 00:05:42.726 "zone_management": false, 00:05:42.726 "zone_append": false, 00:05:42.726 "compare": false, 00:05:42.726 "compare_and_write": false, 00:05:42.726 "abort": true, 00:05:42.726 "seek_hole": false, 00:05:42.726 "seek_data": false, 00:05:42.726 "copy": true, 00:05:42.726 "nvme_iov_md": false 00:05:42.726 }, 00:05:42.726 
"memory_domains": [ 00:05:42.726 { 00:05:42.726 "dma_device_id": "system", 00:05:42.726 "dma_device_type": 1 00:05:42.726 }, 00:05:42.726 { 00:05:42.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.726 "dma_device_type": 2 00:05:42.726 } 00:05:42.726 ], 00:05:42.726 "driver_specific": { 00:05:42.726 "passthru": { 00:05:42.726 "name": "Passthru0", 00:05:42.726 "base_bdev_name": "Malloc2" 00:05:42.726 } 00:05:42.726 } 00:05:42.726 } 00:05:42.726 ]' 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.726 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:42.986 ************************************ 00:05:42.986 END TEST rpc_daemon_integrity 00:05:42.986 ************************************ 00:05:42.986 11:59:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.986 00:05:42.986 real 0m0.332s 00:05:42.986 user 0m0.179s 00:05:42.986 sys 0m0.050s 00:05:42.986 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.986 11:59:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:42.986 11:59:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.986 11:59:30 rpc -- rpc/rpc.sh@84 -- # killprocess 62014 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@950 -- # '[' -z 62014 ']' 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@954 -- # kill -0 62014 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@955 -- # uname 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62014 00:05:42.986 killing process with pid 62014 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62014' 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@969 -- # kill 62014 00:05:42.986 11:59:30 rpc -- common/autotest_common.sh@974 -- # wait 62014 00:05:45.517 00:05:45.517 real 0m5.575s 00:05:45.517 user 0m6.121s 
00:05:45.517 sys 0m0.951s 00:05:45.517 11:59:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.517 ************************************ 00:05:45.517 END TEST rpc 00:05:45.517 ************************************ 00:05:45.517 11:59:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.517 11:59:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:45.517 11:59:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.517 11:59:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.517 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:05:45.517 ************************************ 00:05:45.517 START TEST skip_rpc 00:05:45.517 ************************************ 00:05:45.517 11:59:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:45.517 * Looking for test storage... 00:05:45.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.775 11:59:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:45.775 11:59:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:45.775 11:59:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:45.775 11:59:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.775 11:59:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.775 11:59:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.775 ************************************ 00:05:45.775 START TEST skip_rpc 00:05:45.775 ************************************ 00:05:45.775 11:59:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:45.775 11:59:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62246 00:05:45.775 11:59:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.775 11:59:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:45.775 11:59:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:45.775 [2024-07-26 11:59:33.622077] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
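At this point skip_rpc has launched spdk_tgt with --no-rpc-server, and the NOT rpc_cmd spdk_get_version check that follows confirms no RPC socket ever comes up. A hand-run equivalent might look like this (a sketch only, not part of the captured output; $SPDK_DIR is an assumed checkout path):

    # hypothetical check that RPC calls fail when the server is disabled
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered even though --no-rpc-server was passed"
    else
        echo "expected: no RPC server is listening"
    fi
    kill $tgt_pid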
00:05:45.775 [2024-07-26 11:59:33.622226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:05:46.033 [2024-07-26 11:59:33.795080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.292 [2024-07-26 11:59:34.043021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62246 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62246 ']' 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62246 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62246 00:05:51.557 killing process with pid 62246 00:05:51.557 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.558 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.558 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62246' 00:05:51.558 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62246 00:05:51.558 11:59:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62246 00:05:53.459 00:05:53.459 real 0m7.546s 00:05:53.459 user 0m7.036s 00:05:53.459 sys 0m0.412s 00:05:53.459 11:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.459 11:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 ************************************ 00:05:53.460 END TEST skip_rpc 00:05:53.460 
************************************ 00:05:53.460 11:59:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.460 11:59:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.460 11:59:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.460 11:59:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 ************************************ 00:05:53.460 START TEST skip_rpc_with_json 00:05:53.460 ************************************ 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62350 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62350 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62350 ']' 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.460 11:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.460 [2024-07-26 11:59:41.232829] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
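The skip_rpc_with_json test that starts here drives a save_config / --json round trip: create a TCP transport over RPC, dump the running configuration to a file, then boot a fresh target from that file and grep its log for the transport init message. A rough hand-run version of the same flow (a sketch under assumed paths, not part of the captured output):

    # hypothetical save_config round trip
    rpc=$SPDK_DIR/scripts/rpc.py
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
    sleep 5
    $rpc nvmf_create_transport -t tcp
    $rpc save_config > /tmp/config.json
    kill %1; wait
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt && echo "transport restored from JSON config"
    kill %1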
00:05:53.460 [2024-07-26 11:59:41.234180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62350 ] 00:05:53.460 [2024-07-26 11:59:41.403239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.718 [2024-07-26 11:59:41.640583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.663 [2024-07-26 11:59:42.560803] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.663 request: 00:05:54.663 { 00:05:54.663 "trtype": "tcp", 00:05:54.663 "method": "nvmf_get_transports", 00:05:54.663 "req_id": 1 00:05:54.663 } 00:05:54.663 Got JSON-RPC error response 00:05:54.663 response: 00:05:54.663 { 00:05:54.663 "code": -19, 00:05:54.663 "message": "No such device" 00:05:54.663 } 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.663 [2024-07-26 11:59:42.576888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.663 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.921 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.921 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.921 { 00:05:54.921 "subsystems": [ 00:05:54.921 { 00:05:54.921 "subsystem": "keyring", 00:05:54.921 "config": [] 00:05:54.921 }, 00:05:54.922 { 00:05:54.922 "subsystem": "iobuf", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "iobuf_set_options", 00:05:54.922 "params": { 00:05:54.922 "small_pool_count": 8192, 00:05:54.922 "large_pool_count": 1024, 00:05:54.922 "small_bufsize": 8192, 00:05:54.922 "large_bufsize": 135168 00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "sock", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "sock_set_default_impl", 00:05:54.922 "params": { 00:05:54.922 "impl_name": "posix" 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "sock_impl_set_options", 00:05:54.922 "params": { 00:05:54.922 "impl_name": "ssl", 00:05:54.922 "recv_buf_size": 4096, 00:05:54.922 "send_buf_size": 4096, 
00:05:54.922 "enable_recv_pipe": true, 00:05:54.922 "enable_quickack": false, 00:05:54.922 "enable_placement_id": 0, 00:05:54.922 "enable_zerocopy_send_server": true, 00:05:54.922 "enable_zerocopy_send_client": false, 00:05:54.922 "zerocopy_threshold": 0, 00:05:54.922 "tls_version": 0, 00:05:54.922 "enable_ktls": false 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "sock_impl_set_options", 00:05:54.922 "params": { 00:05:54.922 "impl_name": "posix", 00:05:54.922 "recv_buf_size": 2097152, 00:05:54.922 "send_buf_size": 2097152, 00:05:54.922 "enable_recv_pipe": true, 00:05:54.922 "enable_quickack": false, 00:05:54.922 "enable_placement_id": 0, 00:05:54.922 "enable_zerocopy_send_server": true, 00:05:54.922 "enable_zerocopy_send_client": false, 00:05:54.922 "zerocopy_threshold": 0, 00:05:54.922 "tls_version": 0, 00:05:54.922 "enable_ktls": false 00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "vmd", 00:05:54.922 "config": [] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "accel", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "accel_set_options", 00:05:54.922 "params": { 00:05:54.922 "small_cache_size": 128, 00:05:54.922 "large_cache_size": 16, 00:05:54.922 "task_count": 2048, 00:05:54.922 "sequence_count": 2048, 00:05:54.922 "buf_count": 2048 00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "bdev", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "bdev_set_options", 00:05:54.922 "params": { 00:05:54.922 "bdev_io_pool_size": 65535, 00:05:54.922 "bdev_io_cache_size": 256, 00:05:54.922 "bdev_auto_examine": true, 00:05:54.922 "iobuf_small_cache_size": 128, 00:05:54.922 "iobuf_large_cache_size": 16 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "bdev_raid_set_options", 00:05:54.922 "params": { 00:05:54.922 "process_window_size_kb": 1024, 00:05:54.922 "process_max_bandwidth_mb_sec": 0 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "bdev_iscsi_set_options", 00:05:54.922 "params": { 00:05:54.922 "timeout_sec": 30 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "bdev_nvme_set_options", 00:05:54.922 "params": { 00:05:54.922 "action_on_timeout": "none", 00:05:54.922 "timeout_us": 0, 00:05:54.922 "timeout_admin_us": 0, 00:05:54.922 "keep_alive_timeout_ms": 10000, 00:05:54.922 "arbitration_burst": 0, 00:05:54.922 "low_priority_weight": 0, 00:05:54.922 "medium_priority_weight": 0, 00:05:54.922 "high_priority_weight": 0, 00:05:54.922 "nvme_adminq_poll_period_us": 10000, 00:05:54.922 "nvme_ioq_poll_period_us": 0, 00:05:54.922 "io_queue_requests": 0, 00:05:54.922 "delay_cmd_submit": true, 00:05:54.922 "transport_retry_count": 4, 00:05:54.922 "bdev_retry_count": 3, 00:05:54.922 "transport_ack_timeout": 0, 00:05:54.922 "ctrlr_loss_timeout_sec": 0, 00:05:54.922 "reconnect_delay_sec": 0, 00:05:54.922 "fast_io_fail_timeout_sec": 0, 00:05:54.922 "disable_auto_failback": false, 00:05:54.922 "generate_uuids": false, 00:05:54.922 "transport_tos": 0, 00:05:54.922 "nvme_error_stat": false, 00:05:54.922 "rdma_srq_size": 0, 00:05:54.922 "io_path_stat": false, 00:05:54.922 "allow_accel_sequence": false, 00:05:54.922 "rdma_max_cq_size": 0, 00:05:54.922 "rdma_cm_event_timeout_ms": 0, 00:05:54.922 "dhchap_digests": [ 00:05:54.922 "sha256", 00:05:54.922 "sha384", 00:05:54.922 "sha512" 00:05:54.922 ], 00:05:54.922 "dhchap_dhgroups": [ 00:05:54.922 "null", 00:05:54.922 "ffdhe2048", 00:05:54.922 
"ffdhe3072", 00:05:54.922 "ffdhe4096", 00:05:54.922 "ffdhe6144", 00:05:54.922 "ffdhe8192" 00:05:54.922 ] 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "bdev_nvme_set_hotplug", 00:05:54.922 "params": { 00:05:54.922 "period_us": 100000, 00:05:54.922 "enable": false 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "bdev_wait_for_examine" 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "scsi", 00:05:54.922 "config": null 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "scheduler", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "framework_set_scheduler", 00:05:54.922 "params": { 00:05:54.922 "name": "static" 00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "vhost_scsi", 00:05:54.922 "config": [] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "vhost_blk", 00:05:54.922 "config": [] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "ublk", 00:05:54.922 "config": [] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "nbd", 00:05:54.922 "config": [] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "nvmf", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "nvmf_set_config", 00:05:54.922 "params": { 00:05:54.922 "discovery_filter": "match_any", 00:05:54.922 "admin_cmd_passthru": { 00:05:54.922 "identify_ctrlr": false 00:05:54.922 } 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "nvmf_set_max_subsystems", 00:05:54.922 "params": { 00:05:54.922 "max_subsystems": 1024 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "nvmf_set_crdt", 00:05:54.922 "params": { 00:05:54.922 "crdt1": 0, 00:05:54.922 "crdt2": 0, 00:05:54.922 "crdt3": 0 00:05:54.922 } 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "method": "nvmf_create_transport", 00:05:54.922 "params": { 00:05:54.922 "trtype": "TCP", 00:05:54.922 "max_queue_depth": 128, 00:05:54.922 "max_io_qpairs_per_ctrlr": 127, 00:05:54.922 "in_capsule_data_size": 4096, 00:05:54.922 "max_io_size": 131072, 00:05:54.922 "io_unit_size": 131072, 00:05:54.922 "max_aq_depth": 128, 00:05:54.922 "num_shared_buffers": 511, 00:05:54.922 "buf_cache_size": 4294967295, 00:05:54.922 "dif_insert_or_strip": false, 00:05:54.922 "zcopy": false, 00:05:54.922 "c2h_success": true, 00:05:54.922 "sock_priority": 0, 00:05:54.922 "abort_timeout_sec": 1, 00:05:54.922 "ack_timeout": 0, 00:05:54.922 "data_wr_pool_size": 0 00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 }, 00:05:54.922 { 00:05:54.922 "subsystem": "iscsi", 00:05:54.922 "config": [ 00:05:54.922 { 00:05:54.922 "method": "iscsi_set_options", 00:05:54.922 "params": { 00:05:54.922 "node_base": "iqn.2016-06.io.spdk", 00:05:54.922 "max_sessions": 128, 00:05:54.922 "max_connections_per_session": 2, 00:05:54.922 "max_queue_depth": 64, 00:05:54.922 "default_time2wait": 2, 00:05:54.922 "default_time2retain": 20, 00:05:54.922 "first_burst_length": 8192, 00:05:54.922 "immediate_data": true, 00:05:54.922 "allow_duplicated_isid": false, 00:05:54.922 "error_recovery_level": 0, 00:05:54.922 "nop_timeout": 60, 00:05:54.922 "nop_in_interval": 30, 00:05:54.922 "disable_chap": false, 00:05:54.922 "require_chap": false, 00:05:54.922 "mutual_chap": false, 00:05:54.922 "chap_group": 0, 00:05:54.922 "max_large_datain_per_connection": 64, 00:05:54.922 "max_r2t_per_connection": 4, 00:05:54.922 "pdu_pool_size": 36864, 00:05:54.922 "immediate_data_pool_size": 16384, 00:05:54.922 "data_out_pool_size": 2048 
00:05:54.922 } 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 } 00:05:54.922 ] 00:05:54.922 } 00:05:54.922 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.922 11:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62350 00:05:54.922 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62350 ']' 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62350 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62350 00:05:54.923 killing process with pid 62350 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62350' 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62350 00:05:54.923 11:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62350 00:05:57.455 11:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62406 00:05:57.455 11:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.455 11:59:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62406 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62406 ']' 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62406 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62406 00:06:02.727 killing process with pid 62406 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62406' 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62406 00:06:02.727 11:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62406 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:05.259 00:06:05.259 real 0m11.661s 00:06:05.259 user 0m11.084s 00:06:05.259 sys 0m0.863s 00:06:05.259 ************************************ 00:06:05.259 END TEST skip_rpc_with_json 00:06:05.259 ************************************ 00:06:05.259 11:59:52 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 11:59:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:05.259 11:59:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.259 11:59:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.259 11:59:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 ************************************ 00:06:05.259 START TEST skip_rpc_with_delay 00:06:05.259 ************************************ 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:05.259 11:59:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:05.259 [2024-07-26 11:59:52.972169] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
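The app.c error just above is exactly what skip_rpc_with_delay checks for: spdk_tgt must refuse --wait-for-rpc (hold initialization until an RPC arrives) when it is also told to run without an RPC server. A minimal hand-run check (sketch only; the path is an assumption):

    # hypothetical check of the contradictory flag combination
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # expected on stderr:
    #   Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    # and a non-zero exit status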
00:06:05.259 [2024-07-26 11:59:52.972312] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.259 00:06:05.259 real 0m0.179s 00:06:05.259 user 0m0.094s 00:06:05.259 sys 0m0.083s 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.259 ************************************ 00:06:05.259 END TEST skip_rpc_with_delay 00:06:05.259 ************************************ 00:06:05.259 11:59:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 11:59:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:05.259 11:59:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:05.259 11:59:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:05.259 11:59:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.259 11:59:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.259 11:59:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 ************************************ 00:06:05.259 START TEST exit_on_failed_rpc_init 00:06:05.259 ************************************ 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62540 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62540 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62540 ']' 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.259 11:59:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.259 [2024-07-26 11:59:53.216040] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:05.259 [2024-07-26 11:59:53.216411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:06:05.522 [2024-07-26 11:59:53.380733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.781 [2024-07-26 11:59:53.610497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:06.718 11:59:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:06.718 [2024-07-26 11:59:54.683256] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:06:06.718 [2024-07-26 11:59:54.684052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:06:06.978 [2024-07-26 11:59:54.883112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.237 [2024-07-26 11:59:55.122500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.237 [2024-07-26 11:59:55.122607] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
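The rpc.c error just above is the failure exit_on_failed_rpc_init is named after: a second spdk_tgt (core mask 0x2) tries to bind the same /var/tmp/spdk.sock the first instance already owns. A hand-run sketch of the same collision (assumed paths; not part of the captured output):

    # hypothetical reproduction of the RPC socket collision
    $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &          # first instance owns /var/tmp/spdk.sock
    sleep 5
    $SPDK_DIR/build/bin/spdk_tgt -m 0x2            # second instance, same default socket
    # expected: "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    # and a non-zero exit; giving the second instance its own -r <socket path> avoids the clash
    kill %1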
00:06:07.237 [2024-07-26 11:59:55.122634] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:07.237 [2024-07-26 11:59:55.122649] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:07.804 11:59:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62540 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62540 ']' 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62540 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62540 00:06:07.805 killing process with pid 62540 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62540' 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62540 00:06:07.805 11:59:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62540 00:06:10.343 00:06:10.343 real 0m5.039s 00:06:10.343 user 0m5.616s 00:06:10.343 sys 0m0.646s 00:06:10.343 11:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.343 11:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.343 ************************************ 00:06:10.343 END TEST exit_on_failed_rpc_init 00:06:10.343 ************************************ 00:06:10.343 11:59:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.343 ************************************ 00:06:10.343 END TEST skip_rpc 00:06:10.343 ************************************ 00:06:10.343 00:06:10.343 real 0m24.816s 00:06:10.343 user 0m23.964s 00:06:10.343 sys 0m2.269s 00:06:10.343 11:59:58 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.343 11:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.343 11:59:58 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.343 11:59:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.343 11:59:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.343 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.343 
************************************ 00:06:10.343 START TEST rpc_client 00:06:10.343 ************************************ 00:06:10.343 11:59:58 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.602 * Looking for test storage... 00:06:10.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.602 11:59:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.602 OK 00:06:10.602 11:59:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.602 00:06:10.602 real 0m0.212s 00:06:10.602 user 0m0.093s 00:06:10.602 sys 0m0.130s 00:06:10.602 11:59:58 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.602 11:59:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.602 ************************************ 00:06:10.602 END TEST rpc_client 00:06:10.602 ************************************ 00:06:10.602 11:59:58 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.602 11:59:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.602 11:59:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.602 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.602 ************************************ 00:06:10.602 START TEST json_config 00:06:10.602 ************************************ 00:06:10.602 11:59:58 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:681cafa1-0731-46ac-b02a-daaaadf83aad 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=681cafa1-0731-46ac-b02a-daaaadf83aad 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.861 11:59:58 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.861 11:59:58 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.861 11:59:58 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.861 11:59:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.861 11:59:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.861 11:59:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.861 11:59:58 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.861 11:59:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@47 -- # : 0 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.861 11:59:58 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.861 WARNING: No tests are enabled so not running JSON configuration tests 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@27 -- # echo 
'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:10.861 11:59:58 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:10.861 00:06:10.861 real 0m0.123s 00:06:10.861 user 0m0.064s 00:06:10.861 sys 0m0.058s 00:06:10.861 ************************************ 00:06:10.861 END TEST json_config 00:06:10.861 ************************************ 00:06:10.861 11:59:58 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.861 11:59:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.861 11:59:58 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.861 11:59:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.861 11:59:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.861 11:59:58 -- common/autotest_common.sh@10 -- # set +x 00:06:10.861 ************************************ 00:06:10.861 START TEST json_config_extra_key 00:06:10.861 ************************************ 00:06:10.861 11:59:58 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:681cafa1-0731-46ac-b02a-daaaadf83aad 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=681cafa1-0731-46ac-b02a-daaaadf83aad 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.121 11:59:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.121 11:59:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.121 11:59:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.121 
11:59:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.121 11:59:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.121 11:59:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.121 11:59:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.121 11:59:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.121 11:59:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.121 11:59:58 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.121 INFO: launching applications... 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:11.121 11:59:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.121 Waiting for target to run... 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62755 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62755 /var/tmp/spdk_tgt.sock 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 62755 ']' 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.121 11:59:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:11.121 11:59:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.121 [2024-07-26 11:59:58.975622] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
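For reference, the waitforlisten call traced above blocks until the freshly started spdk_tgt answers JSON-RPC requests on /var/tmp/spdk_tgt.sock. A minimal sketch of that wait, assuming a bounded rpc.py probe loop rather than the exact logic in autotest_common.sh (the retry count and interval here are illustrative):

    # Poll the target's RPC socket until a request succeeds or the target dies.
    wait_for_rpc_socket() {
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target already exited
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                    rpc_get_methods >/dev/null 2>&1; then
                return 0                              # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }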
00:06:11.121 [2024-07-26 11:59:58.975765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62755 ] 00:06:11.689 [2024-07-26 11:59:59.378742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.689 [2024-07-26 11:59:59.587613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.633 00:06:12.633 INFO: shutting down applications... 00:06:12.633 12:00:00 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.633 12:00:00 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.633 12:00:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:12.633 12:00:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62755 ]] 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62755 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:12.633 12:00:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.199 12:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.199 12:00:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.199 12:00:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:13.199 12:00:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.458 12:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.458 12:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.458 12:00:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:13.458 12:00:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.024 12:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.024 12:00:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.025 12:00:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:14.025 12:00:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:14.589 12:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:14.589 12:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.589 12:00:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:14.589 12:00:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.156 12:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.156 12:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.156 12:00:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 
00:06:15.156 12:00:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62755 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.721 12:00:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.721 SPDK target shutdown done 00:06:15.721 12:00:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.721 Success 00:06:15.721 00:06:15.721 real 0m4.654s 00:06:15.721 user 0m4.225s 00:06:15.721 sys 0m0.567s 00:06:15.721 ************************************ 00:06:15.721 END TEST json_config_extra_key 00:06:15.721 ************************************ 00:06:15.721 12:00:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.721 12:00:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 12:00:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.721 12:00:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.721 12:00:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.721 12:00:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 ************************************ 00:06:15.721 START TEST alias_rpc 00:06:15.721 ************************************ 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.721 * Looking for test storage... 00:06:15.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:15.721 12:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.721 12:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62853 00:06:15.721 12:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.721 12:00:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62853 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 62853 ']' 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.721 12:00:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.979 [2024-07-26 12:00:03.732891] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
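The json_config_extra_key shutdown traced above follows a simple pattern: send SIGINT to the target, then poll kill -0 every 0.5 s for at most 30 iterations until the process is gone. A condensed sketch of that sequence; the SIGKILL fallback at the end is an illustrative assumption, since the trace never reaches it (the target exits after a few iterations):

    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 0     # clean exit observed
            sleep 0.5
        done
        echo "app $pid did not exit after SIGINT" >&2
        kill -9 "$pid"                                 # assumed escalation, not traced
    }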
00:06:15.979 [2024-07-26 12:00:03.733389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62853 ] 00:06:15.979 [2024-07-26 12:00:03.903448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.237 [2024-07-26 12:00:04.133875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.170 12:00:05 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.170 12:00:05 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.170 12:00:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:17.427 12:00:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62853 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 62853 ']' 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 62853 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62853 00:06:17.427 killing process with pid 62853 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62853' 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 62853 00:06:17.427 12:00:05 alias_rpc -- common/autotest_common.sh@974 -- # wait 62853 00:06:19.959 ************************************ 00:06:19.959 END TEST alias_rpc 00:06:19.959 ************************************ 00:06:19.959 00:06:19.959 real 0m4.348s 00:06:19.959 user 0m4.305s 00:06:19.959 sys 0m0.574s 00:06:19.959 12:00:07 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.959 12:00:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.959 12:00:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:19.959 12:00:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:19.959 12:00:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.959 12:00:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.959 12:00:07 -- common/autotest_common.sh@10 -- # set +x 00:06:19.959 ************************************ 00:06:19.959 START TEST spdkcli_tcp 00:06:19.959 ************************************ 00:06:19.959 12:00:07 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:20.218 * Looking for test storage... 
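The alias_rpc teardown above runs killprocess from autotest_common.sh: it verifies the PID is still alive, reads the command name with ps --no-headers -o comm= so it never signals an unrelated reuse of the PID (and refuses to signal sudo), then sends SIGTERM and reaps the target. A condensed, illustrative rendering of those steps:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0         # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")        # reactor_0 for spdk_tgt
        [ "$name" != "sudo" ] || return 1              # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                # reap if it is our child
    }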
00:06:20.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62953 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.218 12:00:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62953 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 62953 ']' 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.218 12:00:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.218 [2024-07-26 12:00:08.153186] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
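spdk_tgt is launched here with -m 0x3, a hexadecimal core mask selecting CPUs 0 and 1, which is why two "Reactor started" notices appear in the output below. A small illustrative helper (not part of the test scripts) for decoding such a mask:

    coremask_to_cpus() {
        local mask=$(( $1 ))
        local cpu=0
        local cpus=()
        while (( mask )); do
            (( mask & 1 )) && cpus+=("$cpu")
            mask=$(( mask >> 1 ))
            cpu=$(( cpu + 1 ))
        done
        echo "${cpus[@]}"
    }
    coremask_to_cpus 0x3    # prints: 0 1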
00:06:20.218 [2024-07-26 12:00:08.153304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62953 ] 00:06:20.476 [2024-07-26 12:00:08.325987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.804 [2024-07-26 12:00:08.560960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.804 [2024-07-26 12:00:08.560995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.740 12:00:09 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.740 12:00:09 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:21.740 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62980 00:06:21.740 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.740 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.740 [ 00:06:21.740 "bdev_malloc_delete", 00:06:21.740 "bdev_malloc_create", 00:06:21.740 "bdev_null_resize", 00:06:21.740 "bdev_null_delete", 00:06:21.740 "bdev_null_create", 00:06:21.740 "bdev_nvme_cuse_unregister", 00:06:21.740 "bdev_nvme_cuse_register", 00:06:21.740 "bdev_opal_new_user", 00:06:21.740 "bdev_opal_set_lock_state", 00:06:21.740 "bdev_opal_delete", 00:06:21.740 "bdev_opal_get_info", 00:06:21.740 "bdev_opal_create", 00:06:21.740 "bdev_nvme_opal_revert", 00:06:21.740 "bdev_nvme_opal_init", 00:06:21.740 "bdev_nvme_send_cmd", 00:06:21.740 "bdev_nvme_get_path_iostat", 00:06:21.740 "bdev_nvme_get_mdns_discovery_info", 00:06:21.740 "bdev_nvme_stop_mdns_discovery", 00:06:21.740 "bdev_nvme_start_mdns_discovery", 00:06:21.740 "bdev_nvme_set_multipath_policy", 00:06:21.740 "bdev_nvme_set_preferred_path", 00:06:21.740 "bdev_nvme_get_io_paths", 00:06:21.740 "bdev_nvme_remove_error_injection", 00:06:21.740 "bdev_nvme_add_error_injection", 00:06:21.740 "bdev_nvme_get_discovery_info", 00:06:21.740 "bdev_nvme_stop_discovery", 00:06:21.740 "bdev_nvme_start_discovery", 00:06:21.740 "bdev_nvme_get_controller_health_info", 00:06:21.740 "bdev_nvme_disable_controller", 00:06:21.740 "bdev_nvme_enable_controller", 00:06:21.740 "bdev_nvme_reset_controller", 00:06:21.740 "bdev_nvme_get_transport_statistics", 00:06:21.740 "bdev_nvme_apply_firmware", 00:06:21.740 "bdev_nvme_detach_controller", 00:06:21.740 "bdev_nvme_get_controllers", 00:06:21.740 "bdev_nvme_attach_controller", 00:06:21.740 "bdev_nvme_set_hotplug", 00:06:21.740 "bdev_nvme_set_options", 00:06:21.740 "bdev_passthru_delete", 00:06:21.740 "bdev_passthru_create", 00:06:21.740 "bdev_lvol_set_parent_bdev", 00:06:21.740 "bdev_lvol_set_parent", 00:06:21.740 "bdev_lvol_check_shallow_copy", 00:06:21.740 "bdev_lvol_start_shallow_copy", 00:06:21.740 "bdev_lvol_grow_lvstore", 00:06:21.740 "bdev_lvol_get_lvols", 00:06:21.740 "bdev_lvol_get_lvstores", 00:06:21.740 "bdev_lvol_delete", 00:06:21.740 "bdev_lvol_set_read_only", 00:06:21.740 "bdev_lvol_resize", 00:06:21.740 "bdev_lvol_decouple_parent", 00:06:21.740 "bdev_lvol_inflate", 00:06:21.740 "bdev_lvol_rename", 00:06:21.740 "bdev_lvol_clone_bdev", 00:06:21.740 "bdev_lvol_clone", 00:06:21.740 "bdev_lvol_snapshot", 00:06:21.740 "bdev_lvol_create", 00:06:21.740 "bdev_lvol_delete_lvstore", 00:06:21.740 "bdev_lvol_rename_lvstore", 00:06:21.740 "bdev_lvol_create_lvstore", 
00:06:21.740 "bdev_raid_set_options", 00:06:21.740 "bdev_raid_remove_base_bdev", 00:06:21.740 "bdev_raid_add_base_bdev", 00:06:21.740 "bdev_raid_delete", 00:06:21.740 "bdev_raid_create", 00:06:21.740 "bdev_raid_get_bdevs", 00:06:21.740 "bdev_error_inject_error", 00:06:21.740 "bdev_error_delete", 00:06:21.740 "bdev_error_create", 00:06:21.740 "bdev_split_delete", 00:06:21.740 "bdev_split_create", 00:06:21.740 "bdev_delay_delete", 00:06:21.740 "bdev_delay_create", 00:06:21.740 "bdev_delay_update_latency", 00:06:21.740 "bdev_zone_block_delete", 00:06:21.740 "bdev_zone_block_create", 00:06:21.740 "blobfs_create", 00:06:21.740 "blobfs_detect", 00:06:21.740 "blobfs_set_cache_size", 00:06:21.740 "bdev_xnvme_delete", 00:06:21.740 "bdev_xnvme_create", 00:06:21.740 "bdev_aio_delete", 00:06:21.740 "bdev_aio_rescan", 00:06:21.740 "bdev_aio_create", 00:06:21.740 "bdev_ftl_set_property", 00:06:21.740 "bdev_ftl_get_properties", 00:06:21.740 "bdev_ftl_get_stats", 00:06:21.740 "bdev_ftl_unmap", 00:06:21.740 "bdev_ftl_unload", 00:06:21.740 "bdev_ftl_delete", 00:06:21.740 "bdev_ftl_load", 00:06:21.740 "bdev_ftl_create", 00:06:21.740 "bdev_virtio_attach_controller", 00:06:21.740 "bdev_virtio_scsi_get_devices", 00:06:21.740 "bdev_virtio_detach_controller", 00:06:21.740 "bdev_virtio_blk_set_hotplug", 00:06:21.740 "bdev_iscsi_delete", 00:06:21.740 "bdev_iscsi_create", 00:06:21.740 "bdev_iscsi_set_options", 00:06:21.740 "accel_error_inject_error", 00:06:21.740 "ioat_scan_accel_module", 00:06:21.740 "dsa_scan_accel_module", 00:06:21.740 "iaa_scan_accel_module", 00:06:21.740 "keyring_file_remove_key", 00:06:21.740 "keyring_file_add_key", 00:06:21.740 "keyring_linux_set_options", 00:06:21.740 "iscsi_get_histogram", 00:06:21.740 "iscsi_enable_histogram", 00:06:21.740 "iscsi_set_options", 00:06:21.740 "iscsi_get_auth_groups", 00:06:21.740 "iscsi_auth_group_remove_secret", 00:06:21.740 "iscsi_auth_group_add_secret", 00:06:21.740 "iscsi_delete_auth_group", 00:06:21.740 "iscsi_create_auth_group", 00:06:21.740 "iscsi_set_discovery_auth", 00:06:21.740 "iscsi_get_options", 00:06:21.740 "iscsi_target_node_request_logout", 00:06:21.740 "iscsi_target_node_set_redirect", 00:06:21.740 "iscsi_target_node_set_auth", 00:06:21.740 "iscsi_target_node_add_lun", 00:06:21.740 "iscsi_get_stats", 00:06:21.740 "iscsi_get_connections", 00:06:21.740 "iscsi_portal_group_set_auth", 00:06:21.740 "iscsi_start_portal_group", 00:06:21.740 "iscsi_delete_portal_group", 00:06:21.740 "iscsi_create_portal_group", 00:06:21.740 "iscsi_get_portal_groups", 00:06:21.740 "iscsi_delete_target_node", 00:06:21.740 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.740 "iscsi_target_node_add_pg_ig_maps", 00:06:21.740 "iscsi_create_target_node", 00:06:21.740 "iscsi_get_target_nodes", 00:06:21.740 "iscsi_delete_initiator_group", 00:06:21.740 "iscsi_initiator_group_remove_initiators", 00:06:21.740 "iscsi_initiator_group_add_initiators", 00:06:21.740 "iscsi_create_initiator_group", 00:06:21.740 "iscsi_get_initiator_groups", 00:06:21.740 "nvmf_set_crdt", 00:06:21.740 "nvmf_set_config", 00:06:21.740 "nvmf_set_max_subsystems", 00:06:21.740 "nvmf_stop_mdns_prr", 00:06:21.740 "nvmf_publish_mdns_prr", 00:06:21.740 "nvmf_subsystem_get_listeners", 00:06:21.740 "nvmf_subsystem_get_qpairs", 00:06:21.740 "nvmf_subsystem_get_controllers", 00:06:21.740 "nvmf_get_stats", 00:06:21.740 "nvmf_get_transports", 00:06:21.740 "nvmf_create_transport", 00:06:21.740 "nvmf_get_targets", 00:06:21.740 "nvmf_delete_target", 00:06:21.740 "nvmf_create_target", 00:06:21.740 
"nvmf_subsystem_allow_any_host", 00:06:21.740 "nvmf_subsystem_remove_host", 00:06:21.740 "nvmf_subsystem_add_host", 00:06:21.740 "nvmf_ns_remove_host", 00:06:21.740 "nvmf_ns_add_host", 00:06:21.741 "nvmf_subsystem_remove_ns", 00:06:21.741 "nvmf_subsystem_add_ns", 00:06:21.741 "nvmf_subsystem_listener_set_ana_state", 00:06:21.741 "nvmf_discovery_get_referrals", 00:06:21.741 "nvmf_discovery_remove_referral", 00:06:21.741 "nvmf_discovery_add_referral", 00:06:21.741 "nvmf_subsystem_remove_listener", 00:06:21.741 "nvmf_subsystem_add_listener", 00:06:21.741 "nvmf_delete_subsystem", 00:06:21.741 "nvmf_create_subsystem", 00:06:21.741 "nvmf_get_subsystems", 00:06:21.741 "env_dpdk_get_mem_stats", 00:06:21.741 "nbd_get_disks", 00:06:21.741 "nbd_stop_disk", 00:06:21.741 "nbd_start_disk", 00:06:21.741 "ublk_recover_disk", 00:06:21.741 "ublk_get_disks", 00:06:21.741 "ublk_stop_disk", 00:06:21.741 "ublk_start_disk", 00:06:21.741 "ublk_destroy_target", 00:06:21.741 "ublk_create_target", 00:06:21.741 "virtio_blk_create_transport", 00:06:21.741 "virtio_blk_get_transports", 00:06:21.741 "vhost_controller_set_coalescing", 00:06:21.741 "vhost_get_controllers", 00:06:21.741 "vhost_delete_controller", 00:06:21.741 "vhost_create_blk_controller", 00:06:21.741 "vhost_scsi_controller_remove_target", 00:06:21.741 "vhost_scsi_controller_add_target", 00:06:21.741 "vhost_start_scsi_controller", 00:06:21.741 "vhost_create_scsi_controller", 00:06:21.741 "thread_set_cpumask", 00:06:21.741 "framework_get_governor", 00:06:21.741 "framework_get_scheduler", 00:06:21.741 "framework_set_scheduler", 00:06:21.741 "framework_get_reactors", 00:06:21.741 "thread_get_io_channels", 00:06:21.741 "thread_get_pollers", 00:06:21.741 "thread_get_stats", 00:06:21.741 "framework_monitor_context_switch", 00:06:21.741 "spdk_kill_instance", 00:06:21.741 "log_enable_timestamps", 00:06:21.741 "log_get_flags", 00:06:21.741 "log_clear_flag", 00:06:21.741 "log_set_flag", 00:06:21.741 "log_get_level", 00:06:21.741 "log_set_level", 00:06:21.741 "log_get_print_level", 00:06:21.741 "log_set_print_level", 00:06:21.741 "framework_enable_cpumask_locks", 00:06:21.741 "framework_disable_cpumask_locks", 00:06:21.741 "framework_wait_init", 00:06:21.741 "framework_start_init", 00:06:21.741 "scsi_get_devices", 00:06:21.741 "bdev_get_histogram", 00:06:21.741 "bdev_enable_histogram", 00:06:21.741 "bdev_set_qos_limit", 00:06:21.741 "bdev_set_qd_sampling_period", 00:06:21.741 "bdev_get_bdevs", 00:06:21.741 "bdev_reset_iostat", 00:06:21.741 "bdev_get_iostat", 00:06:21.741 "bdev_examine", 00:06:21.741 "bdev_wait_for_examine", 00:06:21.741 "bdev_set_options", 00:06:21.741 "notify_get_notifications", 00:06:21.741 "notify_get_types", 00:06:21.741 "accel_get_stats", 00:06:21.741 "accel_set_options", 00:06:21.741 "accel_set_driver", 00:06:21.741 "accel_crypto_key_destroy", 00:06:21.741 "accel_crypto_keys_get", 00:06:21.741 "accel_crypto_key_create", 00:06:21.741 "accel_assign_opc", 00:06:21.741 "accel_get_module_info", 00:06:21.741 "accel_get_opc_assignments", 00:06:21.741 "vmd_rescan", 00:06:21.741 "vmd_remove_device", 00:06:21.741 "vmd_enable", 00:06:21.741 "sock_get_default_impl", 00:06:21.741 "sock_set_default_impl", 00:06:21.741 "sock_impl_set_options", 00:06:21.741 "sock_impl_get_options", 00:06:21.741 "iobuf_get_stats", 00:06:21.741 "iobuf_set_options", 00:06:21.741 "framework_get_pci_devices", 00:06:21.741 "framework_get_config", 00:06:21.741 "framework_get_subsystems", 00:06:21.741 "trace_get_info", 00:06:21.741 "trace_get_tpoint_group_mask", 00:06:21.741 
"trace_disable_tpoint_group", 00:06:21.741 "trace_enable_tpoint_group", 00:06:21.741 "trace_clear_tpoint_mask", 00:06:21.741 "trace_set_tpoint_mask", 00:06:21.741 "keyring_get_keys", 00:06:21.741 "spdk_get_version", 00:06:21.741 "rpc_get_methods" 00:06:21.741 ] 00:06:21.741 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.741 12:00:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.741 12:00:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.999 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.999 12:00:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62953 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 62953 ']' 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 62953 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62953 00:06:21.999 killing process with pid 62953 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62953' 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 62953 00:06:21.999 12:00:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 62953 00:06:24.532 ************************************ 00:06:24.532 END TEST spdkcli_tcp 00:06:24.532 ************************************ 00:06:24.532 00:06:24.532 real 0m4.371s 00:06:24.532 user 0m7.546s 00:06:24.532 sys 0m0.618s 00:06:24.532 12:00:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.532 12:00:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.532 12:00:12 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.532 12:00:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.532 12:00:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.532 12:00:12 -- common/autotest_common.sh@10 -- # set +x 00:06:24.532 ************************************ 00:06:24.532 START TEST dpdk_mem_utility 00:06:24.532 ************************************ 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:24.532 * Looking for test storage... 
00:06:24.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:24.532 12:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:24.532 12:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63077 00:06:24.532 12:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.532 12:00:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63077 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63077 ']' 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.532 12:00:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:24.791 [2024-07-26 12:00:12.599301] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:06:24.791 [2024-07-26 12:00:12.599735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63077 ] 00:06:25.049 [2024-07-26 12:00:12.775607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.049 [2024-07-26 12:00:13.008653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.985 12:00:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.985 12:00:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:25.985 12:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:25.985 12:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:25.985 12:00:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.985 12:00:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.985 { 00:06:25.985 "filename": "/tmp/spdk_mem_dump.txt" 00:06:25.985 } 00:06:25.985 12:00:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.985 12:00:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:26.244 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:26.244 1 heaps totaling size 820.000000 MiB 00:06:26.244 size: 820.000000 MiB heap id: 0 00:06:26.244 end heaps---------- 00:06:26.244 8 mempools totaling size 598.116089 MiB 00:06:26.244 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:26.244 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:26.244 size: 84.521057 MiB name: bdev_io_63077 00:06:26.244 size: 51.011292 MiB name: evtpool_63077 00:06:26.244 size: 50.003479 MiB name: msgpool_63077 00:06:26.244 size: 21.763794 MiB name: PDU_Pool 00:06:26.244 size: 19.513306 MiB name: SCSI_TASK_Pool 
00:06:26.244 size: 0.026123 MiB name: Session_Pool 00:06:26.244 end mempools------- 00:06:26.244 6 memzones totaling size 4.142822 MiB 00:06:26.244 size: 1.000366 MiB name: RG_ring_0_63077 00:06:26.244 size: 1.000366 MiB name: RG_ring_1_63077 00:06:26.244 size: 1.000366 MiB name: RG_ring_4_63077 00:06:26.244 size: 1.000366 MiB name: RG_ring_5_63077 00:06:26.244 size: 0.125366 MiB name: RG_ring_2_63077 00:06:26.244 size: 0.015991 MiB name: RG_ring_3_63077 00:06:26.244 end memzones------- 00:06:26.244 12:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:26.244 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:06:26.244 list of free elements. size: 18.451538 MiB 00:06:26.244 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:26.244 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:26.244 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:26.244 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:26.244 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:26.244 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:26.244 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:26.244 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:26.244 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:26.244 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:26.244 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:26.244 element at address: 0x200000200000 with size: 0.829956 MiB 00:06:26.244 element at address: 0x20001b000000 with size: 0.564148 MiB 00:06:26.244 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:26.244 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:26.244 element at address: 0x200013800000 with size: 0.467896 MiB 00:06:26.244 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:26.244 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:26.244 list of standard malloc elements. 
size: 199.284058 MiB 00:06:26.244 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:26.244 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:26.244 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:26.244 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:26.244 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:26.244 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:26.244 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:26.244 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:26.244 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:26.244 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:26.244 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:26.244 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:26.244 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:06:26.245 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:26.245 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:26.245 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:06:26.246 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:26.246 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:26.246 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:26.246 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:26.246 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:26.246 list of memzone associated elements. size: 602.264404 MiB 00:06:26.246 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:26.246 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:26.246 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:26.246 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:26.246 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:26.246 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63077_0 00:06:26.246 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:26.246 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63077_0 00:06:26.246 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:26.246 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63077_0 00:06:26.246 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:26.246 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:26.246 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:26.246 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:26.246 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:26.246 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63077 00:06:26.246 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:26.246 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63077 00:06:26.246 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:26.246 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63077 00:06:26.247 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:26.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:26.247 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:26.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:26.247 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:26.247 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:26.247 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:26.247 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:26.247 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:26.247 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63077 00:06:26.247 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:26.247 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63077 00:06:26.247 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:26.247 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63077 00:06:26.247 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:26.247 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63077 00:06:26.247 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:26.247 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63077 00:06:26.247 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:26.247 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:26.247 element at address: 0x200013878680 with size: 0.500549 MiB 
00:06:26.247 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:26.247 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:26.247 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:26.247 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:26.247 associated memzone info: size: 0.125366 MiB name: RG_ring_2_63077 00:06:26.247 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:26.247 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:26.247 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:26.247 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:26.247 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:26.247 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63077 00:06:26.247 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:26.247 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:26.247 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:26.247 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63077 00:06:26.247 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:26.247 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63077 00:06:26.247 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:26.247 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:26.247 12:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:26.247 12:00:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63077 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63077 ']' 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63077 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63077 00:06:26.247 killing process with pid 63077 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63077' 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63077 00:06:26.247 12:00:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63077 00:06:28.785 00:06:28.785 real 0m4.221s 00:06:28.785 user 0m4.151s 00:06:28.785 sys 0m0.564s 00:06:28.785 12:00:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.785 ************************************ 00:06:28.785 END TEST dpdk_mem_utility 00:06:28.785 ************************************ 00:06:28.785 12:00:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.785 12:00:16 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.785 12:00:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.785 12:00:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.785 12:00:16 -- common/autotest_common.sh@10 -- # set +x 00:06:28.785 ************************************ 00:06:28.785 START TEST event 00:06:28.785 
************************************ 00:06:28.785 12:00:16 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:28.785 * Looking for test storage... 00:06:28.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:28.785 12:00:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:28.785 12:00:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:28.785 12:00:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:28.785 12:00:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:28.785 12:00:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.785 12:00:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.044 ************************************ 00:06:29.044 START TEST event_perf 00:06:29.044 ************************************ 00:06:29.044 12:00:16 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.044 Running I/O for 1 seconds...[2024-07-26 12:00:16.822751] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:06:29.044 [2024-07-26 12:00:16.822887] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63177 ] 00:06:29.044 [2024-07-26 12:00:16.984283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.304 [2024-07-26 12:00:17.226496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.304 [2024-07-26 12:00:17.226617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.304 [2024-07-26 12:00:17.226644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.304 Running I/O for 1 seconds...[2024-07-26 12:00:17.228564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.684 00:06:30.684 lcore 0: 200289 00:06:30.684 lcore 1: 200290 00:06:30.684 lcore 2: 200289 00:06:30.684 lcore 3: 200290 00:06:30.684 done. 00:06:30.943 ************************************ 00:06:30.943 END TEST event_perf 00:06:30.943 ************************************ 00:06:30.943 00:06:30.943 real 0m1.905s 00:06:30.943 user 0m4.637s 00:06:30.943 sys 0m0.141s 00:06:30.943 12:00:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.943 12:00:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.943 12:00:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.943 12:00:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:30.943 12:00:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.943 12:00:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.943 ************************************ 00:06:30.943 START TEST event_reactor 00:06:30.943 ************************************ 00:06:30.943 12:00:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:30.943 [2024-07-26 12:00:18.798092] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:30.943 [2024-07-26 12:00:18.798465] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63222 ] 00:06:31.215 [2024-07-26 12:00:18.986382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.503 [2024-07-26 12:00:19.214577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.882 test_start 00:06:32.882 oneshot 00:06:32.882 tick 100 00:06:32.882 tick 100 00:06:32.882 tick 250 00:06:32.882 tick 100 00:06:32.882 tick 100 00:06:32.882 tick 100 00:06:32.882 tick 250 00:06:32.882 tick 500 00:06:32.882 tick 100 00:06:32.882 tick 100 00:06:32.882 tick 250 00:06:32.882 tick 100 00:06:32.882 tick 100 00:06:32.882 test_end 00:06:32.882 00:06:32.882 real 0m1.906s 00:06:32.882 user 0m1.675s 00:06:32.882 sys 0m0.120s 00:06:32.882 12:00:20 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.882 ************************************ 00:06:32.882 END TEST event_reactor 00:06:32.882 ************************************ 00:06:32.882 12:00:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.882 12:00:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.882 12:00:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:32.882 12:00:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.882 12:00:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.882 ************************************ 00:06:32.882 START TEST event_reactor_perf 00:06:32.882 ************************************ 00:06:32.882 12:00:20 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.882 [2024-07-26 12:00:20.779338] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:32.882 [2024-07-26 12:00:20.779455] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:06:33.141 [2024-07-26 12:00:20.949459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.400 [2024-07-26 12:00:21.183737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.777 test_start 00:06:34.777 test_end 00:06:34.777 Performance: 374703 events per second 00:06:34.777 00:06:34.777 real 0m1.874s 00:06:34.777 user 0m1.640s 00:06:34.777 sys 0m0.124s 00:06:34.777 ************************************ 00:06:34.777 END TEST event_reactor_perf 00:06:34.777 ************************************ 00:06:34.777 12:00:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.777 12:00:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.777 12:00:22 event -- event/event.sh@49 -- # uname -s 00:06:34.777 12:00:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:34.777 12:00:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:34.777 12:00:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.777 12:00:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.777 12:00:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.777 ************************************ 00:06:34.777 START TEST event_scheduler 00:06:34.777 ************************************ 00:06:34.777 12:00:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:35.036 * Looking for test storage... 00:06:35.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:35.036 12:00:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:35.036 12:00:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63329 00:06:35.036 12:00:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.036 12:00:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:35.036 12:00:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63329 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63329 ']' 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.036 12:00:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.036 [2024-07-26 12:00:22.900209] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:35.036 [2024-07-26 12:00:22.900341] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63329 ] 00:06:35.295 [2024-07-26 12:00:23.073279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.552 [2024-07-26 12:00:23.304675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.552 [2024-07-26 12:00:23.304861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.552 [2024-07-26 12:00:23.305018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.552 [2024-07-26 12:00:23.305043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:35.809 12:00:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:35.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.809 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.809 POWER: Cannot set governor of lcore 0 to performance 00:06:35.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.809 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.809 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:35.809 POWER: Cannot set governor of lcore 0 to userspace 00:06:35.809 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:35.809 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:35.809 POWER: Unable to set Power Management Environment for lcore 0 00:06:35.809 [2024-07-26 12:00:23.704020] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:35.809 [2024-07-26 12:00:23.704138] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:35.809 [2024-07-26 12:00:23.704317] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:35.809 [2024-07-26 12:00:23.704429] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:35.809 [2024-07-26 12:00:23.704523] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:35.809 [2024-07-26 12:00:23.704623] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.809 12:00:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.809 12:00:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 [2024-07-26 12:00:24.075095] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
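For reference, the scheduler_create_thread sub-test traced below drives the scheduler app purely through the plugin RPCs shown in the xtrace. A minimal bash sketch of that flow, assuming rpc_cmd is the framework's JSON-RPC wrapper (as set by scheduler.sh@29 above) and that masks 0x1-0x8 address the four reactors:

# Four fully-busy threads pinned to cores 0-3, plus matching idle threads;
# the -m masks and -a (active %) values are the ones visible in the trace.
for i in 0 1 2 3; do
    mask=$(printf '0x%x' $((1 << i)))
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# One unpinned ~30%-active thread, one thread created idle and then raised to
# 50% via its returned thread id, and a throwaway thread that is deleted again.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50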
00:06:36.376 12:00:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.376 12:00:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:36.376 12:00:24 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.376 12:00:24 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.376 12:00:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 ************************************ 00:06:36.376 START TEST scheduler_create_thread 00:06:36.376 ************************************ 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 2 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.376 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 3 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 4 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 5 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 6 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 7 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 8 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 9 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 10 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.377 12:00:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.311 ************************************ 00:06:37.311 END TEST scheduler_create_thread 00:06:37.311 ************************************ 00:06:37.311 12:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.311 00:06:37.311 real 0m1.180s 00:06:37.311 user 0m0.014s 00:06:37.311 sys 0m0.010s 00:06:37.311 12:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.311 12:00:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.569 12:00:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.569 12:00:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63329 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63329 ']' 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63329 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63329 00:06:37.569 killing process with pid 63329 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63329' 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63329 00:06:37.569 12:00:25 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63329 00:06:37.828 [2024-07-26 12:00:25.748548] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
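The killprocess helper that ends the scheduler test above follows a simple check-then-signal pattern. A rough bash reconstruction based only on the commands visible in the trace (the real autotest_common.sh helper may differ, for example in how it treats sudo-wrapped processes):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0                    # nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then          # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"
}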
00:06:39.246 00:06:39.246 real 0m4.408s 00:06:39.246 user 0m6.906s 00:06:39.246 sys 0m0.504s 00:06:39.246 12:00:27 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.246 12:00:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.246 ************************************ 00:06:39.246 END TEST event_scheduler 00:06:39.246 ************************************ 00:06:39.246 12:00:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:39.246 12:00:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:39.246 12:00:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.246 12:00:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.246 12:00:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.246 ************************************ 00:06:39.246 START TEST app_repeat 00:06:39.246 ************************************ 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63426 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63426' 00:06:39.246 Process app_repeat pid: 63426 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.246 spdk_app_start Round 0 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:39.246 12:00:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63426 /var/tmp/spdk-nbd.sock 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63426 ']' 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.246 12:00:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.503 [2024-07-26 12:00:27.236068] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:39.503 [2024-07-26 12:00:27.236204] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:06:39.503 [2024-07-26 12:00:27.410336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.761 [2024-07-26 12:00:27.644044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.761 [2024-07-26 12:00:27.644070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.327 12:00:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.327 12:00:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:40.327 12:00:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.327 Malloc0 00:06:40.585 12:00:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.843 Malloc1 00:06:40.843 12:00:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.843 12:00:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.843 /dev/nbd0 00:06:40.844 12:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.102 12:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.102 12:00:28 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.102 1+0 records in 00:06:41.102 1+0 records out 00:06:41.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349279 s, 11.7 MB/s 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.102 12:00:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.102 12:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.102 12:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.102 12:00:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.102 /dev/nbd1 00:06:41.102 12:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.102 12:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.102 1+0 records in 00:06:41.102 1+0 records out 00:06:41.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420681 s, 9.7 MB/s 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.102 12:00:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.102 12:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.102 12:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.360 
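The waitfornbd calls traced above use the same pattern for each device: poll /proc/partitions until the NBD device appears, then prove it serves I/O with a single direct 4 KiB read. A sketch under those assumptions (the retry count and scratch-file path are illustrative, not the exact source):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]        # a non-empty read back means the device is live
}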
12:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.360 { 00:06:41.360 "nbd_device": "/dev/nbd0", 00:06:41.360 "bdev_name": "Malloc0" 00:06:41.360 }, 00:06:41.360 { 00:06:41.360 "nbd_device": "/dev/nbd1", 00:06:41.360 "bdev_name": "Malloc1" 00:06:41.360 } 00:06:41.360 ]' 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.360 { 00:06:41.360 "nbd_device": "/dev/nbd0", 00:06:41.360 "bdev_name": "Malloc0" 00:06:41.360 }, 00:06:41.360 { 00:06:41.360 "nbd_device": "/dev/nbd1", 00:06:41.360 "bdev_name": "Malloc1" 00:06:41.360 } 00:06:41.360 ]' 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.360 /dev/nbd1' 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.360 /dev/nbd1' 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.360 12:00:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.621 256+0 records in 00:06:41.621 256+0 records out 00:06:41.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126803 s, 82.7 MB/s 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.621 256+0 records in 00:06:41.621 256+0 records out 00:06:41.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280066 s, 37.4 MB/s 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.621 256+0 records in 00:06:41.621 256+0 records out 00:06:41.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323664 s, 32.4 MB/s 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.621 12:00:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.621 12:00:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.887 12:00:29 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.887 12:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.146 12:00:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.146 12:00:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.713 12:00:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.091 [2024-07-26 12:00:31.872374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.349 [2024-07-26 12:00:32.099703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.349 [2024-07-26 12:00:32.099704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.607 [2024-07-26 12:00:32.330305] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.607 [2024-07-26 12:00:32.330767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.552 spdk_app_start Round 1 00:06:45.552 12:00:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.552 12:00:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.552 12:00:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63426 /var/tmp/spdk-nbd.sock 00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63426 ']' 00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
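Round 1 below repeats the same data-path verification that Round 0 performed above: create two malloc bdevs over the app's RPC socket, export them as NBD block devices, write a random 1 MiB pattern through each, and compare it back. In plain bash the flow amounts to the following (the RPC methods and dd/cmp arguments are those in the trace; the scratch-file path is illustrative):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096          # creates Malloc0 (64 MB total, 4096-byte blocks)
$RPC bdev_malloc_create 64 4096          # creates Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest $nbd   # fails if the bdev did not persist the pattern
done
rm -f /tmp/nbdrandtest

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1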
00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.552 12:00:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.810 12:00:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.810 12:00:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.810 12:00:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.069 Malloc0 00:06:46.069 12:00:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.328 Malloc1 00:06:46.328 12:00:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.328 12:00:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.599 /dev/nbd0 00:06:46.599 12:00:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.599 12:00:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.599 1+0 records in 00:06:46.599 1+0 records out 
00:06:46.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028938 s, 14.2 MB/s 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.599 12:00:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.599 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.599 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.599 12:00:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.860 /dev/nbd1 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.860 1+0 records in 00:06:46.860 1+0 records out 00:06:46.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411843 s, 9.9 MB/s 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.860 12:00:34 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.860 12:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.119 12:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.119 { 00:06:47.119 "nbd_device": "/dev/nbd0", 00:06:47.120 "bdev_name": "Malloc0" 00:06:47.120 }, 00:06:47.120 { 00:06:47.120 "nbd_device": "/dev/nbd1", 00:06:47.120 "bdev_name": "Malloc1" 00:06:47.120 } 
00:06:47.120 ]' 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.120 { 00:06:47.120 "nbd_device": "/dev/nbd0", 00:06:47.120 "bdev_name": "Malloc0" 00:06:47.120 }, 00:06:47.120 { 00:06:47.120 "nbd_device": "/dev/nbd1", 00:06:47.120 "bdev_name": "Malloc1" 00:06:47.120 } 00:06:47.120 ]' 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.120 /dev/nbd1' 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.120 /dev/nbd1' 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.120 12:00:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.120 256+0 records in 00:06:47.120 256+0 records out 00:06:47.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00593533 s, 177 MB/s 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.120 256+0 records in 00:06:47.120 256+0 records out 00:06:47.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279792 s, 37.5 MB/s 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.120 256+0 records in 00:06:47.120 256+0 records out 00:06:47.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296102 s, 35.4 MB/s 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.120 12:00:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.120 12:00:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.379 12:00:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.638 12:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.897 12:00:35 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.897 12:00:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.897 12:00:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.465 12:00:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:49.842 [2024-07-26 12:00:37.542149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.842 [2024-07-26 12:00:37.765088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.842 [2024-07-26 12:00:37.765109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.101 [2024-07-26 12:00:37.995405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.101 [2024-07-26 12:00:37.995493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.519 spdk_app_start Round 2 00:06:51.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.519 12:00:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.519 12:00:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:51.519 12:00:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63426 /var/tmp/spdk-nbd.sock 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63426 ']' 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
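The nbd_dd_data_verify trace above follows this write/verify cycle; a sketch assembled from the logged commands, not the verbatim helper:
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through each NBD device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare byte for byte
    done
    rm "$tmp"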
00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.519 12:00:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.519 12:00:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.776 Malloc0 00:06:51.776 12:00:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.036 Malloc1 00:06:52.036 12:00:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.036 12:00:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.295 /dev/nbd0 00:06:52.295 12:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.295 12:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.296 1+0 records in 00:06:52.296 1+0 records out 
00:06:52.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299865 s, 13.7 MB/s 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.296 12:00:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.296 12:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.296 12:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.296 12:00:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.555 /dev/nbd1 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.555 1+0 records in 00:06:52.555 1+0 records out 00:06:52.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620507 s, 6.6 MB/s 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.555 12:00:40 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.555 12:00:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.813 12:00:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.813 { 00:06:52.813 "nbd_device": "/dev/nbd0", 00:06:52.813 "bdev_name": "Malloc0" 00:06:52.813 }, 00:06:52.813 { 00:06:52.813 "nbd_device": "/dev/nbd1", 00:06:52.813 "bdev_name": "Malloc1" 00:06:52.813 } 
00:06:52.813 ]' 00:06:52.813 12:00:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.813 { 00:06:52.813 "nbd_device": "/dev/nbd0", 00:06:52.813 "bdev_name": "Malloc0" 00:06:52.813 }, 00:06:52.813 { 00:06:52.813 "nbd_device": "/dev/nbd1", 00:06:52.813 "bdev_name": "Malloc1" 00:06:52.813 } 00:06:52.813 ]' 00:06:52.813 12:00:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.813 12:00:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.814 /dev/nbd1' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.814 /dev/nbd1' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.814 256+0 records in 00:06:52.814 256+0 records out 00:06:52.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119781 s, 87.5 MB/s 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.814 256+0 records in 00:06:52.814 256+0 records out 00:06:52.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264258 s, 39.7 MB/s 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.814 256+0 records in 00:06:52.814 256+0 records out 00:06:52.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292761 s, 35.8 MB/s 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.814 12:00:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.814 12:00:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.072 12:00:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.332 12:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.591 12:00:41 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.591 12:00:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.591 12:00:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.854 12:00:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.230 [2024-07-26 12:00:43.202542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.489 [2024-07-26 12:00:43.431444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.489 [2024-07-26 12:00:43.431444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.747 [2024-07-26 12:00:43.654348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.747 [2024-07-26 12:00:43.654437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.121 12:00:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63426 /var/tmp/spdk-nbd.sock 00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63426 ']' 00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
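nbd_get_count, as traced above, reduces to counting /dev/nbd entries in the RPC listing (0 once the disks are stopped); roughly:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)   # grep -c prints 0 but exits 1 on no match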
00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.121 12:00:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:57.121 12:00:45 event.app_repeat -- event/event.sh@39 -- # killprocess 63426 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63426 ']' 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63426 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63426 00:06:57.121 12:00:45 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.122 killing process with pid 63426 00:06:57.122 12:00:45 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.122 12:00:45 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63426' 00:06:57.122 12:00:45 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63426 00:06:57.122 12:00:45 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63426 00:06:58.496 spdk_app_start is called in Round 0. 00:06:58.496 Shutdown signal received, stop current app iteration 00:06:58.496 Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 reinitialization... 00:06:58.496 spdk_app_start is called in Round 1. 00:06:58.496 Shutdown signal received, stop current app iteration 00:06:58.496 Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 reinitialization... 00:06:58.496 spdk_app_start is called in Round 2. 00:06:58.497 Shutdown signal received, stop current app iteration 00:06:58.497 Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 reinitialization... 00:06:58.497 spdk_app_start is called in Round 3. 00:06:58.497 Shutdown signal received, stop current app iteration 00:06:58.497 12:00:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:58.497 12:00:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:58.497 00:06:58.497 real 0m19.132s 00:06:58.497 user 0m39.044s 00:06:58.497 sys 0m2.971s 00:06:58.497 12:00:46 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.497 ************************************ 00:06:58.497 12:00:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 END TEST app_repeat 00:06:58.497 ************************************ 00:06:58.497 12:00:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:58.497 12:00:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.497 12:00:46 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.497 12:00:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.497 12:00:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.497 ************************************ 00:06:58.497 START TEST cpu_locks 00:06:58.497 ************************************ 00:06:58.497 12:00:46 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:58.755 * Looking for test storage... 
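The app_repeat loop that drove the output above has roughly this shape, reconstructed from the logged event.sh lines; app_pid and the waitforlisten/nbd_rpc_data_verify helpers are assumed from the surrounding scripts:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" "$sock"                  # wait for the app to come back up
        "$rpc" -s "$sock" bdev_malloc_create 64 4096      # Malloc0
        "$rpc" -s "$sock" bdev_malloc_create 64 4096      # Malloc1
        nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        "$rpc" -s "$sock" spdk_kill_instance SIGTERM      # stop this iteration; the app reinitializes
        sleep 3
    done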
00:06:58.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:58.755 12:00:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:58.755 12:00:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:58.755 12:00:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:58.755 12:00:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:58.755 12:00:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.755 12:00:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.755 12:00:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.755 ************************************ 00:06:58.755 START TEST default_locks 00:06:58.755 ************************************ 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63867 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63867 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63867 ']' 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.755 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.756 12:00:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.756 [2024-07-26 12:00:46.623291] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:06:58.756 [2024-07-26 12:00:46.623426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63867 ] 00:06:59.014 [2024-07-26 12:00:46.795549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.273 [2024-07-26 12:00:47.018586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.209 12:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.209 12:00:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:00.209 12:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63867 00:07:00.209 12:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63867 00:07:00.209 12:00:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63867 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 63867 ']' 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 63867 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63867 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.468 killing process with pid 63867 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63867' 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 63867 00:07:00.468 12:00:48 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 63867 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63867 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63867 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 63867 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63867 ']' 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.002 12:00:50 
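locks_exist, traced above for pid 63867, is essentially the following check (taken from the logged commands):
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # a target on -m 0x1 holds a spdk_cpu_lock file lock
    }
    locks_exist 63867                             # passes while the single-core target is running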
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.002 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63867) - No such process 00:07:03.002 ERROR: process (pid: 63867) is no longer running 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.002 00:07:03.002 real 0m4.413s 00:07:03.002 user 0m4.335s 00:07:03.002 sys 0m0.657s 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.002 12:00:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.002 ************************************ 00:07:03.002 END TEST default_locks 00:07:03.002 ************************************ 00:07:03.261 12:00:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:03.262 12:00:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.262 12:00:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.262 12:00:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.262 ************************************ 00:07:03.262 START TEST default_locks_via_rpc 00:07:03.262 ************************************ 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63942 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63942 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63942 ']' 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
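The negative check above ("No such process") goes through the NOT wrapper; a simplified sketch, assuming the essential behavior only (the real helper also treats exit codes above 128 specially):
    NOT() {
        local es=0
        "$@" || es=$?        # run the command, capture its exit status
        (( es != 0 ))        # succeed only if the command failed
    }
    NOT waitforlisten 63867  # 63867 was already killed, so the wait must fail for the test to pass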
00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.262 12:00:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.262 [2024-07-26 12:00:51.107850] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:03.262 [2024-07-26 12:00:51.108001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63942 ] 00:07:03.520 [2024-07-26 12:00:51.270236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.777 [2024-07-26 12:00:51.514896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63942 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63942 00:07:04.713 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63942 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 63942 ']' 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 63942 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.280 12:00:52 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63942 00:07:05.280 12:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.280 12:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.280 killing process with pid 63942 00:07:05.280 12:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63942' 00:07:05.280 12:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 63942 00:07:05.280 12:00:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 63942 00:07:07.808 00:07:07.809 real 0m4.544s 00:07:07.809 user 0m4.487s 00:07:07.809 sys 0m0.699s 00:07:07.809 12:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.809 ************************************ 00:07:07.809 END TEST default_locks_via_rpc 00:07:07.809 ************************************ 00:07:07.809 12:00:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.809 12:00:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:07.809 12:00:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.809 12:00:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.809 12:00:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.809 ************************************ 00:07:07.809 START TEST non_locking_app_on_locked_coremask 00:07:07.809 ************************************ 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64023 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64023 /var/tmp/spdk.sock 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64023 ']' 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.809 12:00:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.809 [2024-07-26 12:00:55.725716] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
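killprocess, exercised again just above for pid 63942 and used throughout these tests, amounts to this sequence (a sketch, not the verbatim autotest_common.sh body):
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # verify the target is still running
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap it so the next test starts clean
    }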
00:07:07.809 [2024-07-26 12:00:55.725850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64023 ] 00:07:08.066 [2024-07-26 12:00:55.896437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.324 [2024-07-26 12:00:56.127609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64043 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64043 /var/tmp/spdk2.sock 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64043 ']' 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.261 12:00:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.261 [2024-07-26 12:00:57.175963] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:09.261 [2024-07-26 12:00:57.176105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64043 ] 00:07:09.520 [2024-07-26 12:00:57.344464] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
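The non_locking_app_on_locked_coremask case above starts two targets on the same core mask; roughly, with paths and flags taken from the log and the backgrounding assumed:
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                  # first target claims the core-0 lock
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second target skips the lock,
    pid2=$!                                                               # so both can share core 0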
00:07:09.520 [2024-07-26 12:00:57.344528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.088 [2024-07-26 12:00:57.797510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.989 12:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.989 12:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.989 12:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64023 00:07:11.989 12:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64023 00:07:11.990 12:00:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64023 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64023 ']' 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64023 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64023 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.925 killing process with pid 64023 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64023' 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64023 00:07:12.925 12:01:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64023 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64043 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64043 ']' 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64043 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64043 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.216 killing process with pid 64043 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64043' 00:07:18.216 12:01:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64043 00:07:18.216 12:01:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64043 00:07:20.749 00:07:20.749 real 0m12.507s 00:07:20.749 user 0m12.736s 00:07:20.749 sys 0m1.414s 00:07:20.749 12:01:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.749 ************************************ 00:07:20.749 END TEST non_locking_app_on_locked_coremask 00:07:20.749 ************************************ 00:07:20.749 12:01:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.749 12:01:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:20.749 12:01:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.749 12:01:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.749 12:01:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.749 ************************************ 00:07:20.749 START TEST locking_app_on_unlocked_coremask 00:07:20.749 ************************************ 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64198 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64198 /var/tmp/spdk.sock 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64198 ']' 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.749 12:01:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.749 [2024-07-26 12:01:08.289046] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:20.749 [2024-07-26 12:01:08.289189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64198 ] 00:07:20.749 [2024-07-26 12:01:08.448410] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:20.749 [2024-07-26 12:01:08.448477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.749 [2024-07-26 12:01:08.679640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64214 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64214 /var/tmp/spdk2.sock 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64214 ']' 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.685 12:01:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.943 [2024-07-26 12:01:09.701817] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:21.943 [2024-07-26 12:01:09.701946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64214 ] 00:07:21.943 [2024-07-26 12:01:09.869792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.511 [2024-07-26 12:01:10.336900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.410 12:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.410 12:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.410 12:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64214 00:07:24.410 12:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64214 00:07:24.410 12:01:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64198 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64198 ']' 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64198 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64198 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64198' 00:07:25.346 killing process with pid 64198 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64198 00:07:25.346 12:01:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64198 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64214 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64214 ']' 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64214 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64214 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.668 killing process with pid 64214 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64214' 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64214 00:07:30.668 12:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64214 00:07:32.570 00:07:32.570 real 0m12.354s 00:07:32.570 user 0m12.614s 00:07:32.570 sys 0m1.359s 00:07:32.570 12:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.570 12:01:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.570 ************************************ 00:07:32.570 END TEST locking_app_on_unlocked_coremask 00:07:32.570 ************************************ 00:07:32.828 12:01:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:32.828 12:01:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.828 12:01:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.828 12:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.828 ************************************ 00:07:32.828 START TEST locking_app_on_locked_coremask 00:07:32.828 ************************************ 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64373 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64373 /var/tmp/spdk.sock 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64373 ']' 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.828 12:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.828 [2024-07-26 12:01:20.717285] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:32.828 [2024-07-26 12:01:20.717440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64373 ] 00:07:33.087 [2024-07-26 12:01:20.887984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.345 [2024-07-26 12:01:21.123611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64389 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64389 /var/tmp/spdk2.sock 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64389 /var/tmp/spdk2.sock 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64389 /var/tmp/spdk2.sock 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64389 ']' 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.283 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.283 [2024-07-26 12:01:22.144035] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:34.283 [2024-07-26 12:01:22.144185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64389 ] 00:07:34.541 [2024-07-26 12:01:22.309639] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64373 has claimed it. 00:07:34.541 [2024-07-26 12:01:22.309726] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:34.799 ERROR: process (pid: 64389) is no longer running 00:07:34.799 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64389) - No such process 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64373 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64373 00:07:34.799 12:01:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64373 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64373 ']' 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64373 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64373 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.366 killing process with pid 64373 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64373' 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64373 00:07:35.366 12:01:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64373 00:07:37.894 00:07:37.894 real 0m5.103s 00:07:37.894 user 0m5.188s 00:07:37.894 sys 0m0.861s 00:07:37.894 12:01:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.894 12:01:25 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:37.894 ************************************ 00:07:37.894 END TEST locking_app_on_locked_coremask 00:07:37.894 ************************************ 00:07:37.894 12:01:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:37.894 12:01:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.894 12:01:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.894 12:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.894 ************************************ 00:07:37.894 START TEST locking_overlapped_coremask 00:07:37.894 ************************************ 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64464 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64464 /var/tmp/spdk.sock 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64464 ']' 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.894 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.895 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.895 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.895 12:01:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 [2024-07-26 12:01:25.893491] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:38.153 [2024-07-26 12:01:25.893632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64464 ] 00:07:38.153 [2024-07-26 12:01:26.064650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.412 [2024-07-26 12:01:26.305904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.412 [2024-07-26 12:01:26.306061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.412 [2024-07-26 12:01:26.306086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64487 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64487 /var/tmp/spdk2.sock 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64487 /var/tmp/spdk2.sock 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64487 /var/tmp/spdk2.sock 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64487 ']' 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.347 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.347 [2024-07-26 12:01:27.308985] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:39.347 [2024-07-26 12:01:27.309113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64487 ] 00:07:39.605 [2024-07-26 12:01:27.476839] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64464 has claimed it. 00:07:39.605 [2024-07-26 12:01:27.476935] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:40.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64487) - No such process 00:07:40.170 ERROR: process (pid: 64487) is no longer running 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64464 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64464 ']' 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64464 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64464 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64464' 00:07:40.170 killing process with pid 64464 00:07:40.170 12:01:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64464 00:07:40.170 12:01:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64464 00:07:42.732 00:07:42.732 real 0m4.670s 00:07:42.732 user 0m12.059s 00:07:42.732 sys 0m0.603s 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.732 ************************************ 00:07:42.732 END TEST locking_overlapped_coremask 00:07:42.732 ************************************ 00:07:42.732 12:01:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:42.732 12:01:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.732 12:01:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.732 12:01:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.732 ************************************ 00:07:42.732 START TEST locking_overlapped_coremask_via_rpc 00:07:42.732 ************************************ 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64552 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64552 /var/tmp/spdk.sock 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64552 ']' 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.732 12:01:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.732 [2024-07-26 12:01:30.628784] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:42.732 [2024-07-26 12:01:30.628927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64552 ] 00:07:42.990 [2024-07-26 12:01:30.801909] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:42.990 [2024-07-26 12:01:30.802002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.249 [2024-07-26 12:01:31.027260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.249 [2024-07-26 12:01:31.027328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.249 [2024-07-26 12:01:31.027365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64570 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64570 /var/tmp/spdk2.sock 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64570 ']' 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.184 12:01:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.184 [2024-07-26 12:01:32.048294] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:44.184 [2024-07-26 12:01:32.048423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64570 ] 00:07:44.443 [2024-07-26 12:01:32.215670] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:44.443 [2024-07-26 12:01:32.215725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.701 [2024-07-26 12:01:32.669500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.701 [2024-07-26 12:01:32.669677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.701 [2024-07-26 12:01:32.669710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.603 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.603 [2024-07-26 12:01:34.569400] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64552 has claimed it. 
00:07:46.603 request: 00:07:46.603 { 00:07:46.603 "method": "framework_enable_cpumask_locks", 00:07:46.603 "req_id": 1 00:07:46.603 } 00:07:46.603 Got JSON-RPC error response 00:07:46.603 response: 00:07:46.603 { 00:07:46.603 "code": -32603, 00:07:46.603 "message": "Failed to claim CPU core: 2" 00:07:46.861 } 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64552 /var/tmp/spdk.sock 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64552 ']' 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64570 /var/tmp/spdk2.sock 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64570 ']' 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.861 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:47.119 00:07:47.119 real 0m4.459s 00:07:47.119 user 0m1.126s 00:07:47.119 sys 0m0.245s 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.119 12:01:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.119 ************************************ 00:07:47.119 END TEST locking_overlapped_coremask_via_rpc 00:07:47.119 ************************************ 00:07:47.119 12:01:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:47.119 12:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64552 ]] 00:07:47.119 12:01:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64552 00:07:47.119 12:01:35 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64552 ']' 00:07:47.119 12:01:35 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64552 00:07:47.119 12:01:35 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:47.119 12:01:35 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64552 00:07:47.120 killing process with pid 64552 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64552' 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64552 00:07:47.120 12:01:35 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64552 00:07:49.665 12:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64570 ]] 00:07:49.665 12:01:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64570 00:07:49.665 12:01:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64570 ']' 00:07:49.665 12:01:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64570 00:07:49.665 12:01:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:49.665 12:01:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.665 
12:01:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64570 00:07:49.922 killing process with pid 64570 00:07:49.922 12:01:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:49.922 12:01:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:49.922 12:01:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64570' 00:07:49.922 12:01:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64570 00:07:49.923 12:01:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64570 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.453 Process with pid 64552 is not found 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64552 ]] 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64552 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64552 ']' 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64552 00:07:52.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64552) - No such process 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64552 is not found' 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64570 ]] 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64570 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64570 ']' 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64570 00:07:52.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64570) - No such process 00:07:52.453 Process with pid 64570 is not found 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64570 is not found' 00:07:52.453 12:01:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:52.453 00:07:52.453 real 0m53.745s 00:07:52.453 user 1m28.180s 00:07:52.453 sys 0m7.023s 00:07:52.453 ************************************ 00:07:52.453 END TEST cpu_locks 00:07:52.453 ************************************ 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.453 12:01:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.453 ************************************ 00:07:52.453 END TEST event 00:07:52.453 ************************************ 00:07:52.453 00:07:52.453 real 1m23.542s 00:07:52.453 user 2m22.286s 00:07:52.453 sys 0m11.233s 00:07:52.453 12:01:40 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.453 12:01:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.453 12:01:40 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.453 12:01:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.454 12:01:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.454 12:01:40 -- common/autotest_common.sh@10 -- # set +x 00:07:52.454 ************************************ 00:07:52.454 START TEST thread 00:07:52.454 ************************************ 00:07:52.454 12:01:40 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:52.454 * Looking for test storage... 
00:07:52.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:52.454 12:01:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.454 12:01:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:52.454 12:01:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.454 12:01:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.454 ************************************ 00:07:52.454 START TEST thread_poller_perf 00:07:52.454 ************************************ 00:07:52.454 12:01:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:52.713 [2024-07-26 12:01:40.434095] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:52.713 [2024-07-26 12:01:40.434216] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64757 ] 00:07:52.713 [2024-07-26 12:01:40.604963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.971 [2024-07-26 12:01:40.832810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.971 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:54.344 ====================================== 00:07:54.344 busy:2498019182 (cyc) 00:07:54.344 total_run_count: 388000 00:07:54.344 tsc_hz: 2490000000 (cyc) 00:07:54.344 ====================================== 00:07:54.344 poller_cost: 6438 (cyc), 2585 (nsec) 00:07:54.344 00:07:54.344 real 0m1.887s 00:07:54.344 user 0m1.653s 00:07:54.344 sys 0m0.125s 00:07:54.344 12:01:42 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.344 12:01:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.344 ************************************ 00:07:54.344 END TEST thread_poller_perf 00:07:54.344 ************************************ 00:07:54.601 12:01:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.601 12:01:42 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:54.601 12:01:42 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.601 12:01:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:54.601 ************************************ 00:07:54.601 START TEST thread_poller_perf 00:07:54.601 ************************************ 00:07:54.601 12:01:42 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:54.601 [2024-07-26 12:01:42.392512] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:07:54.601 [2024-07-26 12:01:42.392633] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64799 ] 00:07:54.601 [2024-07-26 12:01:42.561017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.859 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:54.859 [2024-07-26 12:01:42.804298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.250 ====================================== 00:07:56.250 busy:2494061998 (cyc) 00:07:56.250 total_run_count: 5085000 00:07:56.250 tsc_hz: 2490000000 (cyc) 00:07:56.250 ====================================== 00:07:56.250 poller_cost: 490 (cyc), 196 (nsec) 00:07:56.508 00:07:56.508 real 0m1.895s 00:07:56.508 user 0m1.657s 00:07:56.508 sys 0m0.130s 00:07:56.508 12:01:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.508 12:01:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:56.508 ************************************ 00:07:56.508 END TEST thread_poller_perf 00:07:56.508 ************************************ 00:07:56.508 12:01:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:56.508 00:07:56.508 real 0m4.049s 00:07:56.508 user 0m3.390s 00:07:56.508 sys 0m0.440s 00:07:56.509 12:01:44 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.509 ************************************ 00:07:56.509 END TEST thread 00:07:56.509 ************************************ 00:07:56.509 12:01:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 12:01:44 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:56.509 12:01:44 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.509 12:01:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.509 12:01:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.509 12:01:44 -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 ************************************ 00:07:56.509 START TEST app_cmdline 00:07:56.509 ************************************ 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:56.509 * Looking for test storage... 00:07:56.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:56.509 12:01:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.509 12:01:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64880 00:07:56.509 12:01:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64880 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 64880 ']' 00:07:56.509 12:01:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.509 12:01:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.767 [2024-07-26 12:01:44.591178] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:07:56.767 [2024-07-26 12:01:44.591303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64880 ] 00:07:57.025 [2024-07-26 12:01:44.761351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.025 [2024-07-26 12:01:44.994144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.960 12:01:45 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.960 12:01:45 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:57.960 12:01:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:58.218 { 00:07:58.218 "version": "SPDK v24.09-pre git sha1 1beb86cd6", 00:07:58.218 "fields": { 00:07:58.218 "major": 24, 00:07:58.218 "minor": 9, 00:07:58.218 "patch": 0, 00:07:58.218 "suffix": "-pre", 00:07:58.218 "commit": "1beb86cd6" 00:07:58.218 } 00:07:58.218 } 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:58.218 12:01:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:58.218 12:01:46 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.476 request: 00:07:58.476 { 00:07:58.476 "method": "env_dpdk_get_mem_stats", 00:07:58.476 "req_id": 1 00:07:58.476 } 00:07:58.476 Got JSON-RPC error response 00:07:58.476 response: 00:07:58.476 { 00:07:58.476 "code": -32601, 00:07:58.476 "message": "Method not found" 00:07:58.476 } 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:58.476 12:01:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64880 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 64880 ']' 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 64880 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64880 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.476 killing process with pid 64880 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64880' 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@969 -- # kill 64880 00:07:58.476 12:01:46 app_cmdline -- common/autotest_common.sh@974 -- # wait 64880 00:08:01.008 00:08:01.008 real 0m4.517s 00:08:01.008 user 0m4.708s 00:08:01.008 sys 0m0.626s 00:08:01.008 12:01:48 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.008 12:01:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.008 ************************************ 00:08:01.008 END TEST app_cmdline 00:08:01.008 ************************************ 00:08:01.008 12:01:48 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.008 12:01:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.008 12:01:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.008 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:08:01.008 ************************************ 00:08:01.008 START TEST version 00:08:01.008 ************************************ 00:08:01.008 12:01:48 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.267 * Looking for test storage... 
00:08:01.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.267 12:01:49 version -- app/version.sh@17 -- # get_header_version major 00:08:01.267 12:01:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.267 12:01:49 version -- app/version.sh@17 -- # major=24 00:08:01.267 12:01:49 version -- app/version.sh@18 -- # get_header_version minor 00:08:01.267 12:01:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.267 12:01:49 version -- app/version.sh@18 -- # minor=9 00:08:01.267 12:01:49 version -- app/version.sh@19 -- # get_header_version patch 00:08:01.267 12:01:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.267 12:01:49 version -- app/version.sh@19 -- # patch=0 00:08:01.267 12:01:49 version -- app/version.sh@20 -- # get_header_version suffix 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # cut -f2 00:08:01.267 12:01:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.267 12:01:49 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.267 12:01:49 version -- app/version.sh@20 -- # suffix=-pre 00:08:01.267 12:01:49 version -- app/version.sh@22 -- # version=24.9 00:08:01.267 12:01:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.267 12:01:49 version -- app/version.sh@28 -- # version=24.9rc0 00:08:01.267 12:01:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:01.267 12:01:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.267 12:01:49 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:01.267 12:01:49 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:01.267 ************************************ 00:08:01.267 END TEST version 00:08:01.267 ************************************ 00:08:01.267 00:08:01.267 real 0m0.219s 00:08:01.267 user 0m0.122s 00:08:01.267 sys 0m0.146s 00:08:01.267 12:01:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.267 12:01:49 version -- common/autotest_common.sh@10 -- # set +x 00:08:01.267 12:01:49 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:01.267 12:01:49 -- spdk/autotest.sh@202 -- # uname -s 00:08:01.267 12:01:49 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:01.267 12:01:49 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:01.267 12:01:49 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:01.267 12:01:49 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:08:01.267 12:01:49 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:01.267 12:01:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:08:01.267 12:01:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.267 12:01:49 -- common/autotest_common.sh@10 -- # set +x 00:08:01.267 ************************************ 00:08:01.267 START TEST blockdev_nvme 00:08:01.267 ************************************ 00:08:01.267 12:01:49 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:01.526 * Looking for test storage... 00:08:01.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:01.526 12:01:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65058 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:01.526 12:01:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65058 00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 65058 ']' 00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.526 12:01:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.526 [2024-07-26 12:01:49.490184] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:01.526 [2024-07-26 12:01:49.490312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65058 ] 00:08:01.785 [2024-07-26 12:01:49.660207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.073 [2024-07-26 12:01:49.880384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.044 12:01:50 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.044 12:01:50 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:08:03.044 12:01:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:03.044 12:01:50 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:03.044 12:01:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:03.044 12:01:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:03.044 12:01:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.045 12:01:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:03.045 12:01:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.045 12:01:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.302 12:01:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.302 12:01:51 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:03.302 12:01:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.302 12:01:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.302 12:01:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.303 12:01:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:03.303 12:01:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.303 12:01:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 12:01:51 blockdev_nvme -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.303 12:01:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 12:01:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.561 12:01:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:03.561 12:01:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:03.561 12:01:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:03.561 12:01:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.561 12:01:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.561 12:01:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.561 12:01:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:03.561 12:01:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:03.562 12:01:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "df18131c-1ea5-46e6-ae90-fa5109b3b91f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "df18131c-1ea5-46e6-ae90-fa5109b3b91f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "4ad04545-930d-4be9-8da1-be8902bfb550"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4ad04545-930d-4be9-8da1-be8902bfb550",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' 
' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "29adaf67-f527-475f-98fb-7f77a31c5560"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "29adaf67-f527-475f-98fb-7f77a31c5560",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "200c5b81-caee-4ee5-b881-7faeab3faec3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "200c5b81-caee-4ee5-b881-7faeab3faec3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' 
' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f9f9c6d7-4307-4de8-8c90-03e831a3573a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f9f9c6d7-4307-4de8-8c90-03e831a3573a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6bce8025-67b2-46b7-8262-1c88c400da23"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6bce8025-67b2-46b7-8262-1c88c400da23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:03.562 12:01:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:03.562 12:01:51 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:03.562 12:01:51 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:03.562 12:01:51 blockdev_nvme -- bdev/blockdev.sh@753 -- # 
killprocess 65058 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 65058 ']' 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 65058 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65058 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.562 killing process with pid 65058 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65058' 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 65058 00:08:03.562 12:01:51 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 65058 00:08:06.097 12:01:53 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:06.097 12:01:53 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:06.097 12:01:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:06.097 12:01:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.097 12:01:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.097 ************************************ 00:08:06.097 START TEST bdev_hello_world 00:08:06.097 ************************************ 00:08:06.097 12:01:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:06.355 [2024-07-26 12:01:54.086967] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:06.355 [2024-07-26 12:01:54.087110] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65159 ] 00:08:06.355 [2024-07-26 12:01:54.263477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.613 [2024-07-26 12:01:54.514391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.545 [2024-07-26 12:01:55.242224] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:07.545 [2024-07-26 12:01:55.242287] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:07.545 [2024-07-26 12:01:55.242315] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:07.545 [2024-07-26 12:01:55.245546] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:07.545 [2024-07-26 12:01:55.246029] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:07.545 [2024-07-26 12:01:55.246068] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:07.545 [2024-07-26 12:01:55.246321] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
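The bdev_hello_world run recorded above can be repeated outside the harness with the same binary and arguments shown in the run_test line; this is only a minimal sketch, assuming the repository is checked out at /home/vagrant/spdk_repo/spdk exactly as in this log:
    # Load the QEMU NVMe controllers described in bdev.json, then perform the
    # hello_bdev write/read round-trip against the Nvme0n1 bdev (same flags as above).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1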
00:08:07.545 00:08:07.545 [2024-07-26 12:01:55.246358] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:08.919 00:08:08.919 real 0m2.657s 00:08:08.919 user 0m2.266s 00:08:08.919 sys 0m0.280s 00:08:08.919 ************************************ 00:08:08.919 END TEST bdev_hello_world 00:08:08.919 ************************************ 00:08:08.919 12:01:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.919 12:01:56 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:08.919 12:01:56 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:08.919 12:01:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:08.919 12:01:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.919 12:01:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.919 ************************************ 00:08:08.919 START TEST bdev_bounds 00:08:08.919 ************************************ 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65205 00:08:08.919 Process bdevio pid: 65205 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65205' 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65205 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65205 ']' 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.919 12:01:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:08.919 [2024-07-26 12:01:56.815136] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:08:08.919 [2024-07-26 12:01:56.815268] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65205 ] 00:08:09.177 [2024-07-26 12:01:56.992078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.464 [2024-07-26 12:01:57.245990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.464 [2024-07-26 12:01:57.246159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.464 [2024-07-26 12:01:57.246237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.397 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.397 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:10.397 12:01:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:10.397 I/O targets: 00:08:10.397 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:10.397 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:10.397 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.397 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.397 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:10.397 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:10.397 00:08:10.397 00:08:10.397 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.397 http://cunit.sourceforge.net/ 00:08:10.397 00:08:10.397 00:08:10.397 Suite: bdevio tests on: Nvme3n1 00:08:10.397 Test: blockdev write read block ...passed 00:08:10.397 Test: blockdev write zeroes read block ...passed 00:08:10.397 Test: blockdev write zeroes read no split ...passed 00:08:10.397 Test: blockdev write zeroes read split ...passed 00:08:10.397 Test: blockdev write zeroes read split partial ...passed 00:08:10.397 Test: blockdev reset ...[2024-07-26 12:01:58.246404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:10.397 [2024-07-26 12:01:58.250252] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
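The bdevio suites that follow are driven by the two commands already visible in this log (bdev/blockdev.sh@288 and @293); a minimal sketch of replaying them by hand, assuming the same workspace paths, with the -w and -s 0 flags taken verbatim from the harness invocation and the backgrounding added here only for illustration:
    # Start the bdevio test application against the NVMe bdev config and let it
    # wait for the test-run RPC (flags as used by the harness above).
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # Once it is listening, kick off the full CUnit run over the RPC socket.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests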
00:08:10.397 passed 00:08:10.397 Test: blockdev write read 8 blocks ...passed 00:08:10.397 Test: blockdev write read size > 128k ...passed 00:08:10.397 Test: blockdev write read invalid size ...passed 00:08:10.397 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.397 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.397 Test: blockdev write read max offset ...passed 00:08:10.397 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.397 Test: blockdev writev readv 8 blocks ...passed 00:08:10.397 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.397 Test: blockdev writev readv block ...passed 00:08:10.398 Test: blockdev writev readv size > 128k ...passed 00:08:10.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.398 Test: blockdev comparev and writev ...[2024-07-26 12:01:58.260044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c80a000 len:0x1000 00:08:10.398 [2024-07-26 12:01:58.260133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.398 passed 00:08:10.398 Test: blockdev nvme passthru rw ...passed 00:08:10.398 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:01:58.261434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.398 [2024-07-26 12:01:58.261491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.398 passed 00:08:10.398 Test: blockdev nvme admin passthru ...passed 00:08:10.398 Test: blockdev copy ...passed 00:08:10.398 Suite: bdevio tests on: Nvme2n3 00:08:10.398 Test: blockdev write read block ...passed 00:08:10.398 Test: blockdev write zeroes read block ...passed 00:08:10.398 Test: blockdev write zeroes read no split ...passed 00:08:10.398 Test: blockdev write zeroes read split ...passed 00:08:10.398 Test: blockdev write zeroes read split partial ...passed 00:08:10.398 Test: blockdev reset ...[2024-07-26 12:01:58.355674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:10.398 [2024-07-26 12:01:58.360022] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:10.398 passed 00:08:10.398 Test: blockdev write read 8 blocks ...passed 00:08:10.398 Test: blockdev write read size > 128k ...passed 00:08:10.398 Test: blockdev write read invalid size ...passed 00:08:10.398 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.398 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.398 Test: blockdev write read max offset ...passed 00:08:10.398 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.398 Test: blockdev writev readv 8 blocks ...passed 00:08:10.398 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.398 Test: blockdev writev readv block ...passed 00:08:10.398 Test: blockdev writev readv size > 128k ...passed 00:08:10.398 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.398 Test: blockdev comparev and writev ...[2024-07-26 12:01:58.369602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24ee04000 len:0x1000 00:08:10.398 [2024-07-26 12:01:58.369801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.398 passed 00:08:10.398 Test: blockdev nvme passthru rw ...passed 00:08:10.398 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:01:58.371100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.398 [2024-07-26 12:01:58.371257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:10.398 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:10.657 passed 00:08:10.657 Test: blockdev copy ...passed 00:08:10.657 Suite: bdevio tests on: Nvme2n2 00:08:10.657 Test: blockdev write read block ...passed 00:08:10.657 Test: blockdev write zeroes read block ...passed 00:08:10.657 Test: blockdev write zeroes read no split ...passed 00:08:10.657 Test: blockdev write zeroes read split ...passed 00:08:10.657 Test: blockdev write zeroes read split partial ...passed 00:08:10.657 Test: blockdev reset ...[2024-07-26 12:01:58.450521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:10.657 [2024-07-26 12:01:58.454791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:10.657 passed 00:08:10.657 Test: blockdev write read 8 blocks ...passed 00:08:10.657 Test: blockdev write read size > 128k ...passed 00:08:10.657 Test: blockdev write read invalid size ...passed 00:08:10.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.657 Test: blockdev write read max offset ...passed 00:08:10.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.657 Test: blockdev writev readv 8 blocks ...passed 00:08:10.657 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.657 Test: blockdev writev readv block ...passed 00:08:10.657 Test: blockdev writev readv size > 128k ...passed 00:08:10.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.657 Test: blockdev comparev and writev ...[2024-07-26 12:01:58.463179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e83a000 len:0x1000 00:08:10.657 [2024-07-26 12:01:58.463225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.657 passed 00:08:10.657 Test: blockdev nvme passthru rw ...passed 00:08:10.657 Test: blockdev nvme passthru vendor specific ...passed 00:08:10.657 Test: blockdev nvme admin passthru ...[2024-07-26 12:01:58.464083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.657 [2024-07-26 12:01:58.464133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.657 passed 00:08:10.657 Test: blockdev copy ...passed 00:08:10.657 Suite: bdevio tests on: Nvme2n1 00:08:10.657 Test: blockdev write read block ...passed 00:08:10.657 Test: blockdev write zeroes read block ...passed 00:08:10.657 Test: blockdev write zeroes read no split ...passed 00:08:10.657 Test: blockdev write zeroes read split ...passed 00:08:10.657 Test: blockdev write zeroes read split partial ...passed 00:08:10.657 Test: blockdev reset ...[2024-07-26 12:01:58.546068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:10.657 [2024-07-26 12:01:58.550084] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:10.657 passed 00:08:10.657 Test: blockdev write read 8 blocks ...passed 00:08:10.657 Test: blockdev write read size > 128k ...passed 00:08:10.657 Test: blockdev write read invalid size ...passed 00:08:10.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.657 Test: blockdev write read max offset ...passed 00:08:10.657 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.657 Test: blockdev writev readv 8 blocks ...passed 00:08:10.657 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.657 Test: blockdev writev readv block ...passed 00:08:10.657 Test: blockdev writev readv size > 128k ...passed 00:08:10.657 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.657 Test: blockdev comparev and writev ...[2024-07-26 12:01:58.559818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e834000 len:0x1000 00:08:10.657 [2024-07-26 12:01:58.560004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.657 passed 00:08:10.657 Test: blockdev nvme passthru rw ...passed 00:08:10.657 Test: blockdev nvme passthru vendor specific ...passed 00:08:10.657 Test: blockdev nvme admin passthru ...[2024-07-26 12:01:58.560963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.657 [2024-07-26 12:01:58.561004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:10.657 passed 00:08:10.657 Test: blockdev copy ...passed 00:08:10.657 Suite: bdevio tests on: Nvme1n1 00:08:10.657 Test: blockdev write read block ...passed 00:08:10.657 Test: blockdev write zeroes read block ...passed 00:08:10.657 Test: blockdev write zeroes read no split ...passed 00:08:10.657 Test: blockdev write zeroes read split ...passed 00:08:10.915 Test: blockdev write zeroes read split partial ...passed 00:08:10.915 Test: blockdev reset ...[2024-07-26 12:01:58.645717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:10.915 [2024-07-26 12:01:58.649740] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:10.915 passed 00:08:10.915 Test: blockdev write read 8 blocks ...passed 00:08:10.915 Test: blockdev write read size > 128k ...passed 00:08:10.915 Test: blockdev write read invalid size ...passed 00:08:10.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.915 Test: blockdev write read max offset ...passed 00:08:10.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.915 Test: blockdev writev readv 8 blocks ...passed 00:08:10.915 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.915 Test: blockdev writev readv block ...passed 00:08:10.915 Test: blockdev writev readv size > 128k ...passed 00:08:10.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.915 Test: blockdev comparev and writev ...[2024-07-26 12:01:58.659740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e830000 len:0x1000 00:08:10.915 [2024-07-26 12:01:58.659927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:08:10.915 Test: blockdev nvme passthru rw ...0 sqhd:0018 p:1 m:0 dnr:1 00:08:10.915 passed 00:08:10.915 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:01:58.661185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:10.915 [2024-07-26 12:01:58.661339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:08:10.915 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:08:10.915 passed 00:08:10.915 Test: blockdev copy ...passed 00:08:10.915 Suite: bdevio tests on: Nvme0n1 00:08:10.915 Test: blockdev write read block ...passed 00:08:10.915 Test: blockdev write zeroes read block ...passed 00:08:10.915 Test: blockdev write zeroes read no split ...passed 00:08:10.915 Test: blockdev write zeroes read split ...passed 00:08:10.915 Test: blockdev write zeroes read split partial ...passed 00:08:10.915 Test: blockdev reset ...[2024-07-26 12:01:58.744880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:10.915 [2024-07-26 12:01:58.748957] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:10.915 passed 00:08:10.916 Test: blockdev write read 8 blocks ...passed 00:08:10.916 Test: blockdev write read size > 128k ...passed 00:08:10.916 Test: blockdev write read invalid size ...passed 00:08:10.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:10.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:10.916 Test: blockdev write read max offset ...passed 00:08:10.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:10.916 Test: blockdev writev readv 8 blocks ...passed 00:08:10.916 Test: blockdev writev readv 30 x 1block ...passed 00:08:10.916 Test: blockdev writev readv block ...passed 00:08:10.916 Test: blockdev writev readv size > 128k ...passed 00:08:10.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:10.916 Test: blockdev comparev and writev ...passed[2024-07-26 12:01:58.757941] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:10.916 separate metadata which is not supported yet. 
00:08:10.916 00:08:10.916 Test: blockdev nvme passthru rw ...passed 00:08:10.916 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:01:58.758948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:10.916 [2024-07-26 12:01:58.759138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:10.916 passed 00:08:10.916 Test: blockdev nvme admin passthru ...passed 00:08:10.916 Test: blockdev copy ...passed 00:08:10.916 00:08:10.916 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.916 suites 6 6 n/a 0 0 00:08:10.916 tests 138 138 138 0 0 00:08:10.916 asserts 893 893 893 0 n/a 00:08:10.916 00:08:10.916 Elapsed time = 1.727 seconds 00:08:10.916 0 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65205 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65205 ']' 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65205 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65205 00:08:10.916 killing process with pid 65205 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65205' 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65205 00:08:10.916 12:01:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65205 00:08:12.290 ************************************ 00:08:12.290 END TEST bdev_bounds 00:08:12.290 ************************************ 00:08:12.290 12:01:59 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:12.290 00:08:12.290 real 0m3.248s 00:08:12.290 user 0m7.902s 00:08:12.290 sys 0m0.449s 00:08:12.290 12:01:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.290 12:01:59 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 12:02:00 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:12.290 12:02:00 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.290 12:02:00 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.290 12:02:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 ************************************ 00:08:12.290 START TEST bdev_nbd 00:08:12.290 ************************************ 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- 
bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:12.290 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65271 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65271 /var/tmp/spdk-nbd.sock 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65271 ']' 00:08:12.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.291 12:02:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:12.291 [2024-07-26 12:02:00.161753] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:08:12.291 [2024-07-26 12:02:00.161890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.549 [2024-07-26 12:02:00.342415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.807 [2024-07-26 12:02:00.586886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.385 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:13.386 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.644 1+0 records in 
00:08:13.644 1+0 records out 00:08:13.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533095 s, 7.7 MB/s 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:13.644 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:13.903 1+0 records in 00:08:13.903 1+0 records out 00:08:13.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649287 s, 6.3 MB/s 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:13.903 12:02:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.161 1+0 records in 00:08:14.161 1+0 records out 00:08:14.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705089 s, 5.8 MB/s 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.161 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.419 1+0 records in 00:08:14.419 1+0 records out 00:08:14.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612764 s, 6.7 MB/s 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.419 12:02:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.419 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:14.677 1+0 records in 00:08:14.677 1+0 records out 00:08:14.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795796 s, 5.1 MB/s 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:14.677 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:14.678 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:14.678 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:14.936 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.194 1+0 records in 00:08:15.194 1+0 records out 00:08:15.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730942 s, 5.6 MB/s 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.194 12:02:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.194 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd0", 00:08:15.194 "bdev_name": "Nvme0n1" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd1", 00:08:15.194 "bdev_name": "Nvme1n1" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd2", 00:08:15.194 "bdev_name": "Nvme2n1" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd3", 00:08:15.194 "bdev_name": "Nvme2n2" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd4", 00:08:15.194 "bdev_name": "Nvme2n3" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd5", 00:08:15.194 "bdev_name": "Nvme3n1" 00:08:15.194 } 00:08:15.194 ]' 00:08:15.194 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:15.194 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd0", 00:08:15.194 "bdev_name": "Nvme0n1" 00:08:15.194 }, 00:08:15.194 { 00:08:15.194 "nbd_device": "/dev/nbd1", 00:08:15.194 "bdev_name": "Nvme1n1" 00:08:15.194 }, 00:08:15.195 { 00:08:15.195 "nbd_device": "/dev/nbd2", 00:08:15.195 "bdev_name": "Nvme2n1" 00:08:15.195 }, 00:08:15.195 { 00:08:15.195 "nbd_device": "/dev/nbd3", 00:08:15.195 "bdev_name": "Nvme2n2" 00:08:15.195 }, 00:08:15.195 { 00:08:15.195 "nbd_device": "/dev/nbd4", 00:08:15.195 "bdev_name": "Nvme2n3" 00:08:15.195 }, 00:08:15.195 { 00:08:15.195 "nbd_device": "/dev/nbd5", 00:08:15.195 "bdev_name": "Nvme3n1" 00:08:15.195 } 00:08:15.195 ]' 00:08:15.195 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.453 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.710 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.969 12:02:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.227 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.486 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.751 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.020 12:02:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.020 12:02:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:17.280 /dev/nbd0 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:17.280 
12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.280 1+0 records in 00:08:17.280 1+0 records out 00:08:17.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535639 s, 7.6 MB/s 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.280 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:17.539 /dev/nbd1 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.539 1+0 records in 00:08:17.539 1+0 records out 00:08:17.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753844 s, 5.4 MB/s 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.539 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:17.797 /dev/nbd10 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:17.797 1+0 records in 00:08:17.797 1+0 records out 00:08:17.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851442 s, 4.8 MB/s 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:17.797 12:02:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:18.054 /dev/nbd11 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:18.054 12:02:06 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.054 1+0 records in 00:08:18.054 1+0 records out 00:08:18.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483352 s, 8.5 MB/s 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.054 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:18.620 /dev/nbd12 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.620 1+0 records in 00:08:18.620 1+0 records out 00:08:18.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645978 s, 6.3 MB/s 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.620 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:18.879 /dev/nbd13 
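Every namespace above goes through the same attach-and-wait sequence: rpc.py nbd_start_disk exports the bdev on /dev/nbdN, then a waitfornbd-style helper polls /proc/partitions until the kernel node shows up and finishes with a single direct-I/O read as a sanity check. A minimal sketch of that pattern, reconstructed from the xtrace (the 20-iteration budget, the grep, dd and stat calls are taken from the trace; the sleep interval and the failure handling are assumptions):

    # Sketch only -- the real helper is waitfornbd in common/autotest_common.sh.
    wait_for_nbd() {
        local nbd_name=$1 i
        # first loop: wait for the device to appear in /proc/partitions
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                     # assumed back-off, not visible in the trace
        done
        # second loop: retry a single 4 KiB direct read until the node answers I/O
        for (( i = 1; i <= 20; i++ )); do
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && { rm -f /tmp/nbdtest; return 0; }
            fi
            sleep 0.1                     # assumed
        done
        rm -f /tmp/nbdtest
        return 1
    }

In the trace it is invoked as waitfornbd "$(basename "$nbd_device")" immediately after each nbd_start_disk RPC returns the device path.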
00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.879 1+0 records in 00:08:18.879 1+0 records out 00:08:18.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914995 s, 4.5 MB/s 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.879 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:19.138 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd0", 00:08:19.138 "bdev_name": "Nvme0n1" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd1", 00:08:19.138 "bdev_name": "Nvme1n1" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd10", 00:08:19.138 "bdev_name": "Nvme2n1" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd11", 00:08:19.138 "bdev_name": "Nvme2n2" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd12", 00:08:19.138 "bdev_name": "Nvme2n3" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd13", 00:08:19.138 "bdev_name": "Nvme3n1" 00:08:19.138 } 00:08:19.138 ]' 00:08:19.138 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:19.138 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd0", 00:08:19.138 "bdev_name": "Nvme0n1" 00:08:19.138 }, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd1", 00:08:19.138 "bdev_name": "Nvme1n1" 00:08:19.138 
}, 00:08:19.138 { 00:08:19.138 "nbd_device": "/dev/nbd10", 00:08:19.139 "bdev_name": "Nvme2n1" 00:08:19.139 }, 00:08:19.139 { 00:08:19.139 "nbd_device": "/dev/nbd11", 00:08:19.139 "bdev_name": "Nvme2n2" 00:08:19.139 }, 00:08:19.139 { 00:08:19.139 "nbd_device": "/dev/nbd12", 00:08:19.139 "bdev_name": "Nvme2n3" 00:08:19.139 }, 00:08:19.139 { 00:08:19.139 "nbd_device": "/dev/nbd13", 00:08:19.139 "bdev_name": "Nvme3n1" 00:08:19.139 } 00:08:19.139 ]' 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:19.139 /dev/nbd1 00:08:19.139 /dev/nbd10 00:08:19.139 /dev/nbd11 00:08:19.139 /dev/nbd12 00:08:19.139 /dev/nbd13' 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:19.139 /dev/nbd1 00:08:19.139 /dev/nbd10 00:08:19.139 /dev/nbd11 00:08:19.139 /dev/nbd12 00:08:19.139 /dev/nbd13' 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:19.139 256+0 records in 00:08:19.139 256+0 records out 00:08:19.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495596 s, 212 MB/s 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.139 12:02:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:19.139 256+0 records in 00:08:19.139 256+0 records out 00:08:19.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121679 s, 8.6 MB/s 00:08:19.139 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.139 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:19.398 256+0 records in 00:08:19.398 256+0 records out 00:08:19.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122651 s, 8.5 MB/s 00:08:19.398 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.398 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:19.398 256+0 records in 00:08:19.398 256+0 records out 00:08:19.398 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.123696 s, 8.5 MB/s 00:08:19.398 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.398 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:19.657 256+0 records in 00:08:19.657 256+0 records out 00:08:19.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121198 s, 8.7 MB/s 00:08:19.657 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.657 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:19.657 256+0 records in 00:08:19.657 256+0 records out 00:08:19.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125222 s, 8.4 MB/s 00:08:19.657 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:19.657 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:19.918 256+0 records in 00:08:19.919 256+0 records out 00:08:19.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12827 s, 8.2 MB/s 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:19.919 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.178 12:02:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.437 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.438 12:02:08 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.438 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.696 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:20.955 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.214 12:02:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:21.214 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:21.472 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:21.472 malloc_lvol_verify 00:08:21.731 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:21.731 42f39580-5fa4-4321-bfa0-c98991aa7599 00:08:21.731 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:21.990 3c37043a-d7f2-4fea-9c59-879c7fa14659 00:08:21.990 12:02:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:22.248 /dev/nbd0 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:22.248 mke2fs 1.46.5 (30-Dec-2021) 00:08:22.248 Discarding device blocks: 0/4096 done 00:08:22.248 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:22.248 00:08:22.248 Allocating group tables: 0/1 done 00:08:22.248 Writing inode tables: 0/1 done 00:08:22.248 Creating journal (1024 blocks): done 00:08:22.248 Writing superblocks and filesystem accounting information: 0/1 done 00:08:22.248 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:22.248 12:02:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.248 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65271 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65271 ']' 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65271 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65271 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.507 killing process with pid 65271 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65271' 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65271 00:08:22.507 12:02:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65271 00:08:23.915 12:02:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:23.915 00:08:23.915 real 0m11.661s 00:08:23.915 user 0m15.064s 00:08:23.915 sys 0m4.633s 00:08:23.915 12:02:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.915 12:02:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:23.915 ************************************ 00:08:23.915 END TEST bdev_nbd 00:08:23.915 ************************************ 00:08:23.915 12:02:11 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:23.915 12:02:11 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:23.915 skipping fio tests on NVMe due to multi-ns failures. 
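The teardown at the end of bdev_nbd goes through a killprocess-style helper: it refuses an empty PID, checks with kill -0 that the target (pid 65271 here, the SPDK reactor) is still alive, reads the process name from ps so it never signals a sudo wrapper directly, then sends the default TERM and waits for the PID so sockets and hugepages are really released before the next stage starts. A hedged reconstruction of that flow (only the calls visible in the xtrace are reproduced; the branch taken when the process name is sudo is an assumption, since this run never hits it):

    # Sketch of the killprocess pattern seen above.
    kill_process() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # the '[' -z "$pid" ']' guard
        kill -0 "$pid" 2>/dev/null || return 0       # already gone: nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # assumed: a sudo wrapper would need different handling than a direct kill
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap it; ignore "not a child" noise
    }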
00:08:23.915 12:02:11 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:23.915 12:02:11 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:23.915 12:02:11 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:23.915 12:02:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:23.915 12:02:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.915 12:02:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.915 ************************************ 00:08:23.915 START TEST bdev_verify 00:08:23.915 ************************************ 00:08:23.915 12:02:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:23.915 [2024-07-26 12:02:11.867964] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:23.915 [2024-07-26 12:02:11.868130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65667 ] 00:08:24.174 [2024-07-26 12:02:12.038688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.433 [2024-07-26 12:02:12.273105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.433 [2024-07-26 12:02:12.273216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.375 Running I/O for 5 seconds... 
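The bdev_verify stage is bdevperf pointed at the same six-namespace bdev.json, run as a data-integrity workload rather than a benchmark. Reading the flags off the command line in the trace: -q 128 is the per-job queue depth, -o 4096 the I/O size in bytes, -w verify makes every completed write get read back and compared, -t 5 bounds the run to five seconds, and -m 0x3 pins the app to cores 0 and 1; the result table that follows shows each namespace driven by both a Core Mask 0x1 and a Core Mask 0x2 job, which is what -C buys here. A repro of the invocation as it appears in this run (the flag explanations are the editor's reading of the output, not quoted from bdevperf's help text):

    # -q 128    : 128 outstanding I/Os per job
    # -o 4096   : 4 KiB I/Os
    # -w verify : write, read back, compare
    # -t 5      : 5-second run per job
    # -C        : both cores in the mask drive every bdev (hence the 0x1/0x2 job pairs below)
    # -m 0x3    : core mask, cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3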
00:08:30.691 00:08:30.691 Latency(us) 00:08:30.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.691 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0xbd0bd 00:08:30.691 Nvme0n1 : 5.06 1821.22 7.11 0.00 0.00 70153.60 13475.68 72431.76 00:08:30.691 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:30.691 Nvme0n1 : 5.03 1856.72 7.25 0.00 0.00 68700.54 14739.02 69905.07 00:08:30.691 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0xa0000 00:08:30.691 Nvme1n1 : 5.06 1820.72 7.11 0.00 0.00 70110.00 12844.00 68220.61 00:08:30.691 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0xa0000 length 0xa0000 00:08:30.691 Nvme1n1 : 5.07 1868.84 7.30 0.00 0.00 68194.90 9001.33 60640.54 00:08:30.691 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0x80000 00:08:30.691 Nvme2n1 : 5.06 1820.23 7.11 0.00 0.00 69997.86 12317.61 71589.53 00:08:30.691 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x80000 length 0x80000 00:08:30.691 Nvme2n1 : 5.07 1868.41 7.30 0.00 0.00 68033.23 8948.69 61482.77 00:08:30.691 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0x80000 00:08:30.691 Nvme2n2 : 5.06 1819.80 7.11 0.00 0.00 69846.67 12580.81 74116.22 00:08:30.691 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x80000 length 0x80000 00:08:30.691 Nvme2n2 : 5.07 1867.97 7.30 0.00 0.00 67914.41 9053.97 62746.11 00:08:30.691 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0x80000 00:08:30.691 Nvme2n3 : 5.07 1819.29 7.11 0.00 0.00 69762.13 11843.86 74958.44 00:08:30.691 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x80000 length 0x80000 00:08:30.691 Nvme2n3 : 5.07 1867.57 7.30 0.00 0.00 67839.01 9106.61 65272.80 00:08:30.691 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x0 length 0x20000 00:08:30.691 Nvme3n1 : 5.07 1818.78 7.10 0.00 0.00 69682.07 11475.38 75379.56 00:08:30.691 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:30.691 Verification LBA range: start 0x20000 length 0x20000 00:08:30.691 Nvme3n1 : 5.07 1867.17 7.29 0.00 0.00 67790.36 9053.97 66536.15 00:08:30.691 =================================================================================================================== 00:08:30.691 Total : 22116.71 86.39 0.00 0.00 68989.75 8948.69 75379.56 00:08:32.158 00:08:32.158 real 0m7.984s 00:08:32.158 user 0m14.507s 00:08:32.158 sys 0m0.313s 00:08:32.158 12:02:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.158 12:02:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:32.158 ************************************ 00:08:32.158 END TEST bdev_verify 00:08:32.158 ************************************ 00:08:32.158 12:02:19 blockdev_nvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:32.158 12:02:19 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:32.158 12:02:19 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.158 12:02:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.158 ************************************ 00:08:32.158 START TEST bdev_verify_big_io 00:08:32.158 ************************************ 00:08:32.158 12:02:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:32.158 [2024-07-26 12:02:19.933678] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:32.158 [2024-07-26 12:02:19.933892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65771 ] 00:08:32.158 [2024-07-26 12:02:20.104559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.417 [2024-07-26 12:02:20.343488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.417 [2024-07-26 12:02:20.343524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.354 Running I/O for 5 seconds... 00:08:39.920 00:08:39.920 Latency(us) 00:08:39.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.920 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0x0 length 0xbd0b 00:08:39.920 Nvme0n1 : 5.64 139.80 8.74 0.00 0.00 872424.29 29899.16 1340829.71 00:08:39.920 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:39.920 Nvme0n1 : 5.68 155.28 9.71 0.00 0.00 804975.44 22529.64 902870.26 00:08:39.920 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0x0 length 0xa000 00:08:39.920 Nvme1n1 : 5.68 144.15 9.01 0.00 0.00 834388.83 48217.65 1354305.39 00:08:39.920 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0xa000 length 0xa000 00:08:39.920 Nvme1n1 : 5.68 154.19 9.64 0.00 0.00 789635.55 61903.88 758006.75 00:08:39.920 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0x0 length 0x8000 00:08:39.920 Nvme2n1 : 5.69 148.81 9.30 0.00 0.00 791665.45 36636.99 1381256.74 00:08:39.920 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.920 Verification LBA range: start 0x8000 length 0x8000 00:08:39.920 Nvme2n1 : 5.68 153.94 9.62 0.00 0.00 769020.46 61903.88 741162.15 00:08:39.921 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x0 length 0x8000 00:08:39.921 Nvme2n2 : 5.74 153.06 9.57 0.00 0.00 747214.72 38321.45 1421683.77 00:08:39.921 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x8000 length 0x8000 00:08:39.921 Nvme2n2 : 5.73 156.47 9.78 0.00 0.00 
735096.60 43164.27 795064.85 00:08:39.921 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x0 length 0x8000 00:08:39.921 Nvme2n3 : 5.78 163.20 10.20 0.00 0.00 680231.34 35794.76 1105005.39 00:08:39.921 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x8000 length 0x8000 00:08:39.921 Nvme2n3 : 5.73 161.12 10.07 0.00 0.00 698567.11 44006.50 781589.18 00:08:39.921 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x0 length 0x2000 00:08:39.921 Nvme3n1 : 5.84 199.77 12.49 0.00 0.00 543285.19 516.52 909608.10 00:08:39.921 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.921 Verification LBA range: start 0x2000 length 0x2000 00:08:39.921 Nvme3n1 : 5.78 177.29 11.08 0.00 0.00 619725.67 4948.10 788327.02 00:08:39.921 =================================================================================================================== 00:08:39.921 Total : 1907.08 119.19 0.00 0.00 731414.05 516.52 1421683.77 00:08:41.303 00:08:41.303 real 0m9.318s 00:08:41.303 user 0m17.121s 00:08:41.303 sys 0m0.344s 00:08:41.303 12:02:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.303 12:02:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:41.303 ************************************ 00:08:41.303 END TEST bdev_verify_big_io 00:08:41.303 ************************************ 00:08:41.303 12:02:29 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.303 12:02:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:41.303 12:02:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.303 12:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.303 ************************************ 00:08:41.303 START TEST bdev_write_zeroes 00:08:41.303 ************************************ 00:08:41.303 12:02:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:41.562 [2024-07-26 12:02:29.329813] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:41.563 [2024-07-26 12:02:29.329946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65892 ] 00:08:41.563 [2024-07-26 12:02:29.502393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.130 [2024-07-26 12:02:29.816683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.697 Running I/O for 1 seconds... 
00:08:44.068 00:08:44.068 Latency(us) 00:08:44.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.068 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme0n1 : 1.01 9568.55 37.38 0.00 0.00 13339.38 8422.30 40216.47 00:08:44.068 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme1n1 : 1.01 9558.75 37.34 0.00 0.00 13337.81 8896.05 41479.81 00:08:44.068 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme2n1 : 1.01 9585.12 37.44 0.00 0.00 13251.83 7264.23 41269.26 00:08:44.068 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme2n2 : 1.02 9608.79 37.53 0.00 0.00 13126.10 6106.17 41479.81 00:08:44.068 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme2n3 : 1.02 9620.67 37.58 0.00 0.00 13062.80 6369.36 41479.81 00:08:44.068 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.068 Nvme3n1 : 1.03 9656.17 37.72 0.00 0.00 12998.40 5948.25 41269.26 00:08:44.068 =================================================================================================================== 00:08:44.068 Total : 57598.06 224.99 0.00 0.00 13184.77 5948.25 41479.81 00:08:45.002 00:08:45.002 real 0m3.683s 00:08:45.002 user 0m3.180s 00:08:45.002 sys 0m0.380s 00:08:45.002 12:02:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.002 12:02:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:45.002 ************************************ 00:08:45.002 END TEST bdev_write_zeroes 00:08:45.002 ************************************ 00:08:45.002 12:02:32 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:45.002 12:02:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:45.002 12:02:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.002 12:02:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.261 ************************************ 00:08:45.261 START TEST bdev_json_nonenclosed 00:08:45.261 ************************************ 00:08:45.261 12:02:32 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:45.261 [2024-07-26 12:02:33.084515] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:45.261 [2024-07-26 12:02:33.084635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65946 ] 00:08:45.519 [2024-07-26 12:02:33.257302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.519 [2024-07-26 12:02:33.490315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.519 [2024-07-26 12:02:33.490402] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:45.519 [2024-07-26 12:02:33.490426] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:45.519 [2024-07-26 12:02:33.490441] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.084 00:08:46.084 real 0m0.957s 00:08:46.084 user 0m0.716s 00:08:46.084 sys 0m0.135s 00:08:46.085 12:02:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.085 12:02:33 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:46.085 ************************************ 00:08:46.085 END TEST bdev_json_nonenclosed 00:08:46.085 ************************************ 00:08:46.085 12:02:33 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.085 12:02:33 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:46.085 12:02:33 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.085 12:02:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.085 ************************************ 00:08:46.085 START TEST bdev_json_nonarray 00:08:46.085 ************************************ 00:08:46.085 12:02:33 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.342 [2024-07-26 12:02:34.084698] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:46.342 [2024-07-26 12:02:34.084818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65976 ] 00:08:46.342 [2024-07-26 12:02:34.244863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.599 [2024-07-26 12:02:34.544374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.599 [2024-07-26 12:02:34.544496] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:46.599 [2024-07-26 12:02:34.544537] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:46.600 [2024-07-26 12:02:34.544562] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.163 00:08:47.163 real 0m1.066s 00:08:47.163 user 0m0.814s 00:08:47.163 sys 0m0.144s 00:08:47.163 12:02:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.163 12:02:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:47.163 ************************************ 00:08:47.163 END TEST bdev_json_nonarray 00:08:47.163 ************************************ 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:47.163 12:02:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:47.163 00:08:47.163 real 0m45.893s 00:08:47.163 user 1m6.430s 00:08:47.163 sys 0m7.789s 00:08:47.163 12:02:35 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.163 12:02:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.163 ************************************ 00:08:47.163 END TEST blockdev_nvme 00:08:47.163 ************************************ 00:08:47.421 12:02:35 -- spdk/autotest.sh@217 -- # uname -s 00:08:47.421 12:02:35 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:08:47.421 12:02:35 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:47.421 12:02:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.421 12:02:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.421 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:08:47.421 ************************************ 00:08:47.421 START TEST blockdev_nvme_gpt 00:08:47.421 ************************************ 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:47.421 * Looking for test storage... 
00:08:47.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66058 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:47.421 12:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66058 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 66058 ']' 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.421 12:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [2024-07-26 12:02:35.472923] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:08:47.680 [2024-07-26 12:02:35.473052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66058 ] 00:08:47.680 [2024-07-26 12:02:35.643553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.938 [2024-07-26 12:02:35.873767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.873 12:02:36 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.873 12:02:36 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:08:48.873 12:02:36 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:48.873 12:02:36 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:48.873 12:02:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:49.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.698 Waiting for block devices as requested 00:08:49.957 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.957 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.957 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:50.215 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.511 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:55.511 BYT; 00:08:55.511 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:55.511 BYT; 00:08:55.511 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.511 12:02:43 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.511 12:02:43 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:56.452 The operation has completed successfully. 
00:08:56.452 12:02:44 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:57.386 The operation has completed successfully. 00:08:57.386 12:02:45 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:57.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.920 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.920 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.920 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:58.920 12:02:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.920 12:02:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:58.920 [] 00:08:58.920 12:02:46 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.920 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:58.920 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:58.920 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:58.920 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:59.179 12:02:46 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:59.179 12:02:46 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.179 12:02:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.439 
12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.439 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:59.439 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.699 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.699 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:59.699 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:59.700 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "17959f5f-9e91-40ee-a95f-b201a6fcd770"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "17959f5f-9e91-40ee-a95f-b201a6fcd770",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c4296206-062a-455e-ac5b-9cbb24de5215"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c4296206-062a-455e-ac5b-9cbb24de5215",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d584b3a6-c497-4881-81de-a576cd17515b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d584b3a6-c497-4881-81de-a576cd17515b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "79230f57-0b7f-4d88-9b68-e4b4c7d28fe8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "79230f57-0b7f-4d88-9b68-e4b4c7d28fe8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "bc7a7614-b995-4304-ac7a-b306ea7eb568"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "bc7a7614-b995-4304-ac7a-b306ea7eb568",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:59.700 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:59.700 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:59.700 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:59.700 12:02:47 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 66058 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 66058 ']' 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 66058 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66058 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.700 killing process with pid 66058 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66058' 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 66058 00:08:59.700 12:02:47 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 66058 00:09:02.228 12:02:50 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:02.228 12:02:50 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:02.228 12:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:02.228 12:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.228 12:02:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.228 ************************************ 00:09:02.228 START TEST bdev_hello_world 00:09:02.228 ************************************ 00:09:02.229 12:02:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:02.229 [2024-07-26 12:02:50.151698] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:09:02.229 [2024-07-26 12:02:50.151845] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66707 ] 00:09:02.486 [2024-07-26 12:02:50.309927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.744 [2024-07-26 12:02:50.582735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.311 [2024-07-26 12:02:51.262234] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:03.311 [2024-07-26 12:02:51.262289] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:03.311 [2024-07-26 12:02:51.262311] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:03.311 [2024-07-26 12:02:51.265241] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:03.311 [2024-07-26 12:02:51.265974] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:03.311 [2024-07-26 12:02:51.266006] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:03.311 [2024-07-26 12:02:51.266232] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:03.311 00:09:03.311 [2024-07-26 12:02:51.266260] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:04.702 00:09:04.702 real 0m2.470s 00:09:04.702 user 0m2.107s 00:09:04.702 sys 0m0.254s 00:09:04.702 ************************************ 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:04.702 END TEST bdev_hello_world 00:09:04.702 ************************************ 00:09:04.702 12:02:52 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:04.702 12:02:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:04.702 12:02:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.702 12:02:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.702 ************************************ 00:09:04.702 START TEST bdev_bounds 00:09:04.702 ************************************ 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66757 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.702 Process bdevio pid: 66757 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66757' 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66757 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 66757 ']' 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.702 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.702 12:02:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:04.971 [2024-07-26 12:02:52.704779] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:04.971 [2024-07-26 12:02:52.704909] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66757 ] 00:09:04.971 [2024-07-26 12:02:52.876581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.244 [2024-07-26 12:02:53.114740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.244 [2024-07-26 12:02:53.114847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.244 [2024-07-26 12:02:53.114873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.178 12:02:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:06.178 12:02:53 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:09:06.178 12:02:53 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:06.178 I/O targets: 00:09:06.178 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:06.178 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:06.178 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:06.178 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.178 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.178 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:06.178 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:06.178 00:09:06.178 00:09:06.178 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.178 http://cunit.sourceforge.net/ 00:09:06.178 00:09:06.178 00:09:06.178 Suite: bdevio tests on: Nvme3n1 00:09:06.178 Test: blockdev write read block ...passed 00:09:06.178 Test: blockdev write zeroes read block ...passed 00:09:06.178 Test: blockdev write zeroes read no split ...passed 00:09:06.178 Test: blockdev write zeroes read split ...passed 00:09:06.178 Test: blockdev write zeroes read split partial ...passed 00:09:06.178 Test: blockdev reset ...[2024-07-26 12:02:54.025730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:06.178 [2024-07-26 12:02:54.029604] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.178 passed 00:09:06.178 Test: blockdev write read 8 blocks ...passed 00:09:06.178 Test: blockdev write read size > 128k ...passed 00:09:06.178 Test: blockdev write read invalid size ...passed 00:09:06.178 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.178 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.178 Test: blockdev write read max offset ...passed 00:09:06.178 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.178 Test: blockdev writev readv 8 blocks ...passed 00:09:06.178 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.178 Test: blockdev writev readv block ...passed 00:09:06.178 Test: blockdev writev readv size > 128k ...passed 00:09:06.178 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.178 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.038661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268006000 len:0x1000 00:09:06.178 [2024-07-26 12:02:54.038723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.178 passed 00:09:06.178 Test: blockdev nvme passthru rw ...passed 00:09:06.178 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:02:54.039763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.178 [2024-07-26 12:02:54.039812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.178 passed 00:09:06.178 Test: blockdev nvme admin passthru ...passed 00:09:06.178 Test: blockdev copy ...passed 00:09:06.178 Suite: bdevio tests on: Nvme2n3 00:09:06.178 Test: blockdev write read block ...passed 00:09:06.178 Test: blockdev write zeroes read block ...passed 00:09:06.178 Test: blockdev write zeroes read no split ...passed 00:09:06.178 Test: blockdev write zeroes read split ...passed 00:09:06.178 Test: blockdev write zeroes read split partial ...passed 00:09:06.178 Test: blockdev reset ...[2024-07-26 12:02:54.121483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.179 [2024-07-26 12:02:54.125502] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.179 passed 00:09:06.179 Test: blockdev write read 8 blocks ...passed 00:09:06.179 Test: blockdev write read size > 128k ...passed 00:09:06.179 Test: blockdev write read invalid size ...passed 00:09:06.179 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.179 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.179 Test: blockdev write read max offset ...passed 00:09:06.179 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.179 Test: blockdev writev readv 8 blocks ...passed 00:09:06.179 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.179 Test: blockdev writev readv block ...passed 00:09:06.179 Test: blockdev writev readv size > 128k ...passed 00:09:06.179 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.179 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.132558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc3c000 len:0x1000 00:09:06.179 [2024-07-26 12:02:54.132606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.179 passed 00:09:06.179 Test: blockdev nvme passthru rw ...passed 00:09:06.179 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.179 Test: blockdev nvme admin passthru ...[2024-07-26 12:02:54.133269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.179 [2024-07-26 12:02:54.133296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.179 passed 00:09:06.179 Test: blockdev copy ...passed 00:09:06.179 Suite: bdevio tests on: Nvme2n2 00:09:06.179 Test: blockdev write read block ...passed 00:09:06.179 Test: blockdev write zeroes read block ...passed 00:09:06.179 Test: blockdev write zeroes read no split ...passed 00:09:06.438 Test: blockdev write zeroes read split ...passed 00:09:06.438 Test: blockdev write zeroes read split partial ...passed 00:09:06.438 Test: blockdev reset ...[2024-07-26 12:02:54.217233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.438 passed 00:09:06.438 Test: blockdev write read 8 blocks ...[2024-07-26 12:02:54.220823] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.438 passed 00:09:06.438 Test: blockdev write read size > 128k ...passed 00:09:06.438 Test: blockdev write read invalid size ...passed 00:09:06.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.438 Test: blockdev write read max offset ...passed 00:09:06.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.438 Test: blockdev writev readv 8 blocks ...passed 00:09:06.438 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.438 Test: blockdev writev readv block ...passed 00:09:06.438 Test: blockdev writev readv size > 128k ...passed 00:09:06.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.438 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.228528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc36000 len:0x1000 00:09:06.438 [2024-07-26 12:02:54.228581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.438 passed 00:09:06.438 Test: blockdev nvme passthru rw ...passed 00:09:06.438 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:02:54.229341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.438 [2024-07-26 12:02:54.229367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.438 passed 00:09:06.438 Test: blockdev nvme admin passthru ...passed 00:09:06.438 Test: blockdev copy ...passed 00:09:06.438 Suite: bdevio tests on: Nvme2n1 00:09:06.438 Test: blockdev write read block ...passed 00:09:06.438 Test: blockdev write zeroes read block ...passed 00:09:06.438 Test: blockdev write zeroes read no split ...passed 00:09:06.438 Test: blockdev write zeroes read split ...passed 00:09:06.438 Test: blockdev write zeroes read split partial ...passed 00:09:06.438 Test: blockdev reset ...[2024-07-26 12:02:54.309287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:06.438 passed 00:09:06.438 Test: blockdev write read 8 blocks ...[2024-07-26 12:02:54.312984] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.438 passed 00:09:06.438 Test: blockdev write read size > 128k ...passed 00:09:06.438 Test: blockdev write read invalid size ...passed 00:09:06.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.438 Test: blockdev write read max offset ...passed 00:09:06.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.438 Test: blockdev writev readv 8 blocks ...passed 00:09:06.438 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.438 Test: blockdev writev readv block ...passed 00:09:06.438 Test: blockdev writev readv size > 128k ...passed 00:09:06.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.438 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.320324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc32000 len:0x1000 00:09:06.438 [2024-07-26 12:02:54.320373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.438 passed 00:09:06.438 Test: blockdev nvme passthru rw ...passed 00:09:06.438 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.438 Test: blockdev nvme admin passthru ...[2024-07-26 12:02:54.320975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:06.438 [2024-07-26 12:02:54.321004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:06.438 passed 00:09:06.438 Test: blockdev copy ...passed 00:09:06.438 Suite: bdevio tests on: Nvme1n1p2 00:09:06.438 Test: blockdev write read block ...passed 00:09:06.438 Test: blockdev write zeroes read block ...passed 00:09:06.438 Test: blockdev write zeroes read no split ...passed 00:09:06.438 Test: blockdev write zeroes read split ...passed 00:09:06.438 Test: blockdev write zeroes read split partial ...passed 00:09:06.438 Test: blockdev reset ...[2024-07-26 12:02:54.405650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:06.438 [2024-07-26 12:02:54.409015] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:06.438 passed 00:09:06.438 Test: blockdev write read 8 blocks ...passed 00:09:06.438 Test: blockdev write read size > 128k ...passed 00:09:06.438 Test: blockdev write read invalid size ...passed 00:09:06.438 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.438 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.438 Test: blockdev write read max offset ...passed 00:09:06.438 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.438 Test: blockdev writev readv 8 blocks ...passed 00:09:06.438 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.438 Test: blockdev writev readv block ...passed 00:09:06.438 Test: blockdev writev readv size > 128k ...passed 00:09:06.438 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.697 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.416525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27bc2e000 len:0x1000 00:09:06.697 [2024-07-26 12:02:54.416571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.697 passed 00:09:06.697 Test: blockdev nvme passthru rw ...passed 00:09:06.697 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.697 Test: blockdev nvme admin passthru ...passed 00:09:06.697 Test: blockdev copy ...passed 00:09:06.697 Suite: bdevio tests on: Nvme1n1p1 00:09:06.697 Test: blockdev write read block ...passed 00:09:06.697 Test: blockdev write zeroes read block ...passed 00:09:06.697 Test: blockdev write zeroes read no split ...passed 00:09:06.697 Test: blockdev write zeroes read split ...passed 00:09:06.697 Test: blockdev write zeroes read split partial ...passed 00:09:06.697 Test: blockdev reset ...[2024-07-26 12:02:54.483749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:06.697 passed 00:09:06.697 Test: blockdev write read 8 blocks ...[2024-07-26 12:02:54.487173] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
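One detail worth noticing in the comparev notices: on the raw namespace bdevs the COMPARE lands at lba:0, while on the GPT partition bdev Nvme1n1p2 the same partition-relative I/O is printed as lba:655360, because the part vbdev adds the partition's start offset before the command reaches the namespace. A trivial bash sketch of that translation, using the number from the notice above (the partition start LBA is inferred from this log, not read from the GPT header):
  part_start_lba=655360      # inferred: namespace LBA where Nvme1n1p2 begins
  io_lba_in_part=0           # the test compares at the start of the partition bdev
  echo $(( part_start_lba + io_lba_in_part ))   # 655360, matching lba: in the COMPARE notice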
00:09:06.697 passed 00:09:06.697 Test: blockdev write read size > 128k ...passed 00:09:06.697 Test: blockdev write read invalid size ...passed 00:09:06.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.697 Test: blockdev write read max offset ...passed 00:09:06.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.697 Test: blockdev writev readv 8 blocks ...passed 00:09:06.697 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.697 Test: blockdev writev readv block ...passed 00:09:06.697 Test: blockdev writev readv size > 128k ...passed 00:09:06.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.697 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.494157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27900e000 len:0x1000 00:09:06.697 [2024-07-26 12:02:54.494207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:06.697 passed 00:09:06.697 Test: blockdev nvme passthru rw ...passed 00:09:06.697 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.697 Test: blockdev nvme admin passthru ...passed 00:09:06.697 Test: blockdev copy ...passed 00:09:06.697 Suite: bdevio tests on: Nvme0n1 00:09:06.697 Test: blockdev write read block ...passed 00:09:06.697 Test: blockdev write zeroes read block ...passed 00:09:06.697 Test: blockdev write zeroes read no split ...passed 00:09:06.697 Test: blockdev write zeroes read split ...passed 00:09:06.697 Test: blockdev write zeroes read split partial ...passed 00:09:06.697 Test: blockdev reset ...[2024-07-26 12:02:54.560641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:06.697 passed 00:09:06.697 Test: blockdev write read 8 blocks ...[2024-07-26 12:02:54.564206] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:06.697 passed 00:09:06.697 Test: blockdev write read size > 128k ...passed 00:09:06.697 Test: blockdev write read invalid size ...passed 00:09:06.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:06.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:06.697 Test: blockdev write read max offset ...passed 00:09:06.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:06.697 Test: blockdev writev readv 8 blocks ...passed 00:09:06.697 Test: blockdev writev readv 30 x 1block ...passed 00:09:06.697 Test: blockdev writev readv block ...passed 00:09:06.697 Test: blockdev writev readv size > 128k ...passed 00:09:06.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:06.697 Test: blockdev comparev and writev ...[2024-07-26 12:02:54.570417] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:06.697 separate metadata which is not supported yet. 
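The bdevio ERROR above is informational rather than a failure: comparev_and_writev is skipped on Nvme0n1 because that namespace exposes separate (non-interleaved) metadata, which the test does not handle yet. A hedged way to spot such bdevs from the RPC side, assuming the usual bdev_get_bdevs output and that the md_size / md_interleave field names match this SPDK build:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'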
00:09:06.697 passed 00:09:06.697 Test: blockdev nvme passthru rw ...passed 00:09:06.697 Test: blockdev nvme passthru vendor specific ...passed 00:09:06.697 Test: blockdev nvme admin passthru ...[2024-07-26 12:02:54.570929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:06.697 [2024-07-26 12:02:54.570967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:06.697 passed 00:09:06.697 Test: blockdev copy ...passed 00:09:06.697 00:09:06.697 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.697 suites 7 7 n/a 0 0 00:09:06.697 tests 161 161 161 0 0 00:09:06.697 asserts 1025 1025 1025 0 n/a 00:09:06.697 00:09:06.697 Elapsed time = 1.729 seconds 00:09:06.697 0 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66757 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 66757 ']' 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 66757 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.697 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66757 00:09:06.698 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.698 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.698 killing process with pid 66757 00:09:06.698 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66757' 00:09:06.698 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 66757 00:09:06.698 12:02:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 66757 00:09:08.071 12:02:55 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:08.071 00:09:08.071 real 0m3.137s 00:09:08.071 user 0m7.678s 00:09:08.071 sys 0m0.399s 00:09:08.071 12:02:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.071 12:02:55 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:08.071 ************************************ 00:09:08.071 END TEST bdev_bounds 00:09:08.071 ************************************ 00:09:08.071 12:02:55 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:08.071 12:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:08.071 12:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.071 12:02:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:08.071 ************************************ 00:09:08.071 START TEST bdev_nbd 00:09:08.072 ************************************ 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:08.072 12:02:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66823 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66823 /var/tmp/spdk-nbd.sock 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 66823 ']' 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.072 12:02:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:08.072 [2024-07-26 12:02:55.924392] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
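The bdev_nbd prologue above defines the nbd and bdev lists and launches a bdev_svc app serving RPC on /var/tmp/spdk-nbd.sock; the traces that follow then attach each bdev to an NBD device and poll it until it is readable. A condensed, standalone sketch of that attach-and-probe step for a single bdev (the scratch-file path is arbitrary here; the commands themselves are the ones traced below):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  dev=$($rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0)    # RPC prints the device node, e.g. /dev/nbd0
  grep -q -w "${dev#/dev/}" /proc/partitions               # kernel has registered the block device
  dd if=$dev of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one direct-I/O read succeeds
  [ "$(stat -c %s /tmp/nbdtest)" != 0 ]                    # and actually produced data
  $rpc -s $sock nbd_stop_disk $dev                         # detach when done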
00:09:08.072 [2024-07-26 12:02:55.924509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.331 [2024-07-26 12:02:56.094043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.589 [2024-07-26 12:02:56.331578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.156 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.415 1+0 records in 00:09:09.415 1+0 records out 00:09:09.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739606 s, 5.5 MB/s 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.415 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:09.674 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.932 1+0 records in 00:09:09.932 1+0 records out 00:09:09.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866906 s, 4.7 MB/s 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.932 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.191 1+0 records in 00:09:10.191 1+0 records out 00:09:10.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744365 s, 5.5 MB/s 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:10.191 12:02:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.191 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.191 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:10.191 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.191 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.191 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.450 1+0 records in 00:09:10.450 1+0 records out 00:09:10.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682423 s, 6.0 MB/s 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.450 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.709 1+0 records in 00:09:10.709 1+0 records out 00:09:10.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000967654 s, 4.2 MB/s 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.709 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.967 1+0 records in 00:09:10.967 1+0 records out 00:09:10.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805785 s, 5.1 MB/s 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.967 12:02:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:11.226 1+0 records in 00:09:11.226 1+0 records out 00:09:11.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672591 s, 6.1 MB/s 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:11.226 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd0", 00:09:11.485 "bdev_name": "Nvme0n1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd1", 00:09:11.485 "bdev_name": "Nvme1n1p1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd2", 00:09:11.485 "bdev_name": "Nvme1n1p2" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd3", 00:09:11.485 "bdev_name": "Nvme2n1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd4", 00:09:11.485 "bdev_name": "Nvme2n2" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd5", 00:09:11.485 "bdev_name": "Nvme2n3" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd6", 00:09:11.485 "bdev_name": "Nvme3n1" 00:09:11.485 } 00:09:11.485 ]' 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd0", 00:09:11.485 "bdev_name": "Nvme0n1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd1", 00:09:11.485 "bdev_name": "Nvme1n1p1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd2", 00:09:11.485 "bdev_name": "Nvme1n1p2" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd3", 00:09:11.485 "bdev_name": "Nvme2n1" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd4", 00:09:11.485 "bdev_name": "Nvme2n2" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd5", 00:09:11.485 "bdev_name": "Nvme2n3" 00:09:11.485 }, 00:09:11.485 { 00:09:11.485 "nbd_device": "/dev/nbd6", 00:09:11.485 "bdev_name": "Nvme3n1" 00:09:11.485 } 00:09:11.485 ]' 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.485 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.744 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.003 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:12.262 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.262 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.262 12:02:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.262 12:02:59 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.262 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.521 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.779 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.037 12:03:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:13.296 
12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.296 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:13.607 /dev/nbd0 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.607 1+0 records in 00:09:13.607 1+0 records out 00:09:13.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000738 s, 5.6 MB/s 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.607 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:13.867 /dev/nbd1 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:13.867 12:03:01 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.867 1+0 records in 00:09:13.867 1+0 records out 00:09:13.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065707 s, 6.2 MB/s 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.867 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:13.867 /dev/nbd10 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.126 1+0 records in 00:09:14.126 1+0 records out 00:09:14.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724699 s, 5.7 MB/s 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.126 12:03:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:14.126 /dev/nbd11 00:09:14.126 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.385 1+0 records in 00:09:14.385 1+0 records out 00:09:14.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721777 s, 5.7 MB/s 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:14.385 /dev/nbd12 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:14.385 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.653 1+0 records in 00:09:14.653 1+0 records out 00:09:14.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593915 s, 6.9 MB/s 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:14.653 /dev/nbd13 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:14.653 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.915 1+0 records in 00:09:14.915 1+0 records out 00:09:14.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109571 s, 3.7 MB/s 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:14.915 /dev/nbd14 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:14.915 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:15.174 1+0 records in 00:09:15.174 1+0 records out 00:09:15.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683256 s, 6.0 MB/s 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.174 12:03:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.174 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd0", 00:09:15.174 "bdev_name": "Nvme0n1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd1", 00:09:15.174 "bdev_name": "Nvme1n1p1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd10", 00:09:15.174 "bdev_name": "Nvme1n1p2" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd11", 00:09:15.174 "bdev_name": "Nvme2n1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd12", 00:09:15.174 "bdev_name": "Nvme2n2" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd13", 00:09:15.174 "bdev_name": "Nvme2n3" 
00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd14", 00:09:15.174 "bdev_name": "Nvme3n1" 00:09:15.174 } 00:09:15.174 ]' 00:09:15.174 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd0", 00:09:15.174 "bdev_name": "Nvme0n1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd1", 00:09:15.174 "bdev_name": "Nvme1n1p1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd10", 00:09:15.174 "bdev_name": "Nvme1n1p2" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd11", 00:09:15.174 "bdev_name": "Nvme2n1" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd12", 00:09:15.174 "bdev_name": "Nvme2n2" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd13", 00:09:15.174 "bdev_name": "Nvme2n3" 00:09:15.174 }, 00:09:15.174 { 00:09:15.174 "nbd_device": "/dev/nbd14", 00:09:15.174 "bdev_name": "Nvme3n1" 00:09:15.174 } 00:09:15.174 ]' 00:09:15.174 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.432 /dev/nbd1 00:09:15.432 /dev/nbd10 00:09:15.432 /dev/nbd11 00:09:15.432 /dev/nbd12 00:09:15.432 /dev/nbd13 00:09:15.432 /dev/nbd14' 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.432 /dev/nbd1 00:09:15.432 /dev/nbd10 00:09:15.432 /dev/nbd11 00:09:15.432 /dev/nbd12 00:09:15.432 /dev/nbd13 00:09:15.432 /dev/nbd14' 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.432 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:15.432 256+0 records in 00:09:15.432 256+0 records out 00:09:15.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131461 s, 79.8 MB/s 00:09:15.433 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.433 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.433 256+0 records in 00:09:15.433 256+0 records out 00:09:15.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.128558 s, 8.2 MB/s 00:09:15.433 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.433 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:15.691 256+0 records in 00:09:15.691 256+0 records out 00:09:15.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135172 s, 7.8 MB/s 00:09:15.691 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.691 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:15.691 256+0 records in 00:09:15.691 256+0 records out 00:09:15.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144679 s, 7.2 MB/s 00:09:15.691 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.691 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:15.949 256+0 records in 00:09:15.949 256+0 records out 00:09:15.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136544 s, 7.7 MB/s 00:09:15.949 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.949 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:15.949 256+0 records in 00:09:15.949 256+0 records out 00:09:15.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137197 s, 7.6 MB/s 00:09:15.949 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.949 12:03:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:16.209 256+0 records in 00:09:16.209 256+0 records out 00:09:16.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132665 s, 7.9 MB/s 00:09:16.209 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.209 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:16.468 256+0 records in 00:09:16.468 256+0 records out 00:09:16.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133428 s, 7.9 MB/s 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.468 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.728 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.987 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.245 12:03:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.245 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.502 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.761 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.049 12:03:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:18.307 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:18.565 malloc_lvol_verify 00:09:18.565 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:18.824 c93ee843-b974-437f-a4a0-4deae54b4b1b 00:09:18.824 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:19.082 ebbe4181-8425-424f-997f-d03da10d375c 00:09:19.082 12:03:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:19.341 /dev/nbd0 00:09:19.341 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:19.341 mke2fs 1.46.5 (30-Dec-2021) 00:09:19.341 Discarding device blocks: 0/4096 done 00:09:19.341 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:19.341 00:09:19.341 Allocating group tables: 0/1 done 00:09:19.341 Writing inode tables: 0/1 done 00:09:19.341 Creating journal (1024 blocks): done 00:09:19.341 Writing superblocks and filesystem accounting information: 0/1 done 00:09:19.341 00:09:19.341 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:19.341 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:19.341 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.342 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66823 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 66823 ']' 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 66823 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66823 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.600 killing process with pid 66823 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66823' 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 66823 00:09:19.600 12:03:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 66823 00:09:20.976 12:03:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:20.976 00:09:20.976 real 0m13.042s 00:09:20.976 user 0m16.919s 00:09:20.976 sys 0m5.223s 00:09:20.976 12:03:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.976 ************************************ 00:09:20.976 END TEST bdev_nbd 00:09:20.976 12:03:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:20.976 ************************************ 00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:20.976 skipping fio tests on NVMe due to multi-ns failures. 
00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:20.976 12:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:20.976 12:03:08 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:09:20.976 12:03:08 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.976 12:03:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:20.976 ************************************ 00:09:20.976 START TEST bdev_verify 00:09:20.976 ************************************ 00:09:20.976 12:03:08 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:21.235 [2024-07-26 12:03:09.024086] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:21.235 [2024-07-26 12:03:09.024230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67261 ] 00:09:21.235 [2024-07-26 12:03:09.197315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.493 [2024-07-26 12:03:09.447920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.493 [2024-07-26 12:03:09.447959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.424 Running I/O for 5 seconds... 
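While that five-second verify run completes, the bdevperf command line that drives it is worth spelling out. The invocation below is the one traced above, just rewritten with one annotated argument per array entry; the flag meanings are the usual conventions for SPDK's perf tools rather than quotes from its help text, and -C plus the trailing empty argument are carried over exactly as the test script passes them.

#!/usr/bin/env bash
# bdev_verify invocation as traced above, with each flag annotated.
SPDK=/home/vagrant/spdk_repo/spdk

args=(
    --json "$SPDK/test/bdev/bdev.json"   # bdev configuration (the NVMe/GPT bdevs exercised earlier)
    -q 128                               # queue depth per job
    -o 4096                              # I/O size in bytes
    -w verify                            # verify workload: written data is read back and checked
    -t 5                                 # run time in seconds
    -C                                   # passed through exactly as the test script does
    -m 0x3                               # core mask: two cores, matching the two reactors above
    ''                                   # trailing empty argument, also kept as in the trace
)
"$SPDK/build/examples/bdevperf" "${args[@]}"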
00:09:27.691 00:09:27.691 Latency(us) 00:09:27.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.691 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.691 Verification LBA range: start 0x0 length 0xbd0bd 00:09:27.691 Nvme0n1 : 5.06 1504.56 5.88 0.00 0.00 84770.57 7685.35 81275.17 00:09:27.691 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.691 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:27.691 Nvme0n1 : 5.06 1519.01 5.93 0.00 0.00 84000.17 20318.79 88855.24 00:09:27.691 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.691 Verification LBA range: start 0x0 length 0x4ff80 00:09:27.691 Nvme1n1p1 : 5.06 1503.67 5.87 0.00 0.00 84687.50 8948.69 75800.67 00:09:27.692 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:27.692 Nvme1n1p1 : 5.06 1518.40 5.93 0.00 0.00 83845.23 22950.76 79590.71 00:09:27.692 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x0 length 0x4ff7f 00:09:27.692 Nvme1n1p2 : 5.07 1503.28 5.87 0.00 0.00 84577.11 8738.13 75379.56 00:09:27.692 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:27.692 Nvme1n1p2 : 5.06 1517.81 5.93 0.00 0.00 83743.71 24529.94 76221.79 00:09:27.692 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x0 length 0x80000 00:09:27.692 Nvme2n1 : 5.07 1502.94 5.87 0.00 0.00 84335.78 8843.41 76642.90 00:09:27.692 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x80000 length 0x80000 00:09:27.692 Nvme2n1 : 5.06 1517.32 5.93 0.00 0.00 83629.20 23898.27 77906.25 00:09:27.692 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x0 length 0x80000 00:09:27.692 Nvme2n2 : 5.08 1512.58 5.91 0.00 0.00 83801.23 7106.31 79169.59 00:09:27.692 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x80000 length 0x80000 00:09:27.692 Nvme2n2 : 5.08 1525.66 5.96 0.00 0.00 83090.32 6211.44 80854.05 00:09:27.692 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x0 length 0x80000 00:09:27.692 Nvme2n3 : 5.08 1511.86 5.91 0.00 0.00 83689.10 7895.90 79590.71 00:09:27.692 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x80000 length 0x80000 00:09:27.692 Nvme2n3 : 5.08 1525.27 5.96 0.00 0.00 82973.37 5553.45 82117.40 00:09:27.692 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x0 length 0x20000 00:09:27.692 Nvme3n1 : 5.08 1511.51 5.90 0.00 0.00 83589.40 7369.51 80011.82 00:09:27.692 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:27.692 Verification LBA range: start 0x20000 length 0x20000 00:09:27.692 Nvme3n1 : 5.09 1534.83 6.00 0.00 0.00 82399.40 6685.20 81275.17 00:09:27.692 =================================================================================================================== 00:09:27.692 Total : 21208.69 82.85 0.00 0.00 83790.68 5553.45 88855.24 00:09:29.066 
00:09:29.066 real 0m8.035s 00:09:29.066 user 0m14.588s 00:09:29.066 sys 0m0.329s 00:09:29.066 12:03:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.066 12:03:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:29.066 ************************************ 00:09:29.066 END TEST bdev_verify 00:09:29.066 ************************************ 00:09:29.066 12:03:17 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:29.066 12:03:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:09:29.066 12:03:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.066 12:03:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:29.066 ************************************ 00:09:29.066 START TEST bdev_verify_big_io 00:09:29.066 ************************************ 00:09:29.066 12:03:17 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:29.325 [2024-07-26 12:03:17.128333] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:29.325 [2024-07-26 12:03:17.128459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67364 ] 00:09:29.325 [2024-07-26 12:03:17.298697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.582 [2024-07-26 12:03:17.542115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.582 [2024-07-26 12:03:17.542181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.523 Running I/O for 5 seconds... 
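While the second verify pass runs (same tool, 64 KiB I/O this time per the -o 65536 above), a note on the harness plumbing: every suite in this log is launched through run_test, which is what prints the START TEST / END TEST banners and the real/user/sys timing that separate the sections. The sketch below is only a rough stand-in for that wrapper; the real helper lives in common/autotest_common.sh and also handles xtrace and failure bookkeeping, which is omitted here.

#!/usr/bin/env bash
# Minimal stand-in for the run_test wrapper whose banners appear throughout this log.
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" \
                  '************************************'
    time "$@"                     # run the suite; real/user/sys shows up as in the log
    local rc=$?
    printf '%s\n' '************************************' "END TEST $name" \
                  '************************************'
    return $rc
}

# Example, mirroring the invocation traced above (SPDK points at the repo root):
# run_test_sketch bdev_verify_big_io "$SPDK/build/examples/bdevperf" \
#     --json "$SPDK/test/bdev/bdev.json" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''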
00:09:37.082 00:09:37.082 Latency(us) 00:09:37.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.082 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0xbd0b 00:09:37.082 Nvme0n1 : 5.55 136.88 8.56 0.00 0.00 897763.96 28214.70 923083.77 00:09:37.082 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:37.082 Nvme0n1 : 5.67 126.36 7.90 0.00 0.00 975264.59 16844.59 1327354.04 00:09:37.082 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x4ff8 00:09:37.082 Nvme1n1p1 : 5.68 136.59 8.54 0.00 0.00 882091.41 46322.63 875918.91 00:09:37.082 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:37.082 Nvme1n1p1 : 5.74 134.10 8.38 0.00 0.00 904055.74 26846.07 1165645.93 00:09:37.082 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x4ff7 00:09:37.082 Nvme1n1p2 : 5.84 90.35 5.65 0.00 0.00 1310402.97 104436.49 1664245.92 00:09:37.082 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:37.082 Nvme1n1p2 : 5.79 136.56 8.53 0.00 0.00 861949.31 47164.86 963510.80 00:09:37.082 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x8000 00:09:37.082 Nvme2n1 : 5.79 143.10 8.94 0.00 0.00 808910.09 58534.97 875918.91 00:09:37.082 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x8000 length 0x8000 00:09:37.082 Nvme2n1 : 5.85 134.22 8.39 0.00 0.00 855129.11 46954.31 1421683.77 00:09:37.082 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x8000 00:09:37.082 Nvme2n2 : 5.84 148.68 9.29 0.00 0.00 763523.76 46112.08 811909.45 00:09:37.082 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x8000 length 0x8000 00:09:37.082 Nvme2n2 : 5.87 139.27 8.70 0.00 0.00 808149.96 53692.14 1448635.12 00:09:37.082 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x8000 00:09:37.082 Nvme2n3 : 5.85 153.24 9.58 0.00 0.00 728778.63 49059.88 832122.96 00:09:37.082 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x8000 length 0x8000 00:09:37.082 Nvme2n3 : 5.87 143.14 8.95 0.00 0.00 768138.07 14949.58 1462110.79 00:09:37.082 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x0 length 0x2000 00:09:37.082 Nvme3n1 : 5.86 163.87 10.24 0.00 0.00 668146.47 5527.13 848967.56 00:09:37.082 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:37.082 Verification LBA range: start 0x2000 length 0x2000 00:09:37.082 Nvme3n1 : 5.89 159.90 9.99 0.00 0.00 675745.15 4974.42 1293664.85 00:09:37.082 =================================================================================================================== 00:09:37.082 Total : 1946.25 121.64 0.00 0.00 831343.93 4974.42 
1664245.92 00:09:38.982 00:09:38.982 real 0m9.499s 00:09:38.982 user 0m17.447s 00:09:38.982 sys 0m0.356s 00:09:38.982 12:03:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.982 12:03:26 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:38.982 ************************************ 00:09:38.982 END TEST bdev_verify_big_io 00:09:38.982 ************************************ 00:09:38.982 12:03:26 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.982 12:03:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:38.982 12:03:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.982 12:03:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:38.982 ************************************ 00:09:38.982 START TEST bdev_write_zeroes 00:09:38.982 ************************************ 00:09:38.982 12:03:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:38.982 [2024-07-26 12:03:26.670861] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:38.982 [2024-07-26 12:03:26.671031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67485 ] 00:09:38.982 [2024-07-26 12:03:26.845893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.239 [2024-07-26 12:03:27.093749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.189 Running I/O for 1 seconds... 
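The result tables in this section (two above, one just below for write_zeroes) share one layout: a header row, a "Job:" description plus one numeric line per device, and a Total row. When the numbers are needed programmatically, a short awk pass over a saved copy of the output is enough. The sketch below assumes exactly the layout shown here; it tolerates the HH:MM:SS prefix these CI lines carry and skips the Total row, which drops the runtime column and so would need its own rule.

#!/usr/bin/env bash
# Extract per-device IOPS and MiB/s from bdevperf result tables like the ones above.
log=${1:?usage: $0 saved-log.txt}

awk '
    # Plain bdevperf output: "<name> : <runtime> <IOPS> <MiB/s> ..."
    $2 == ":" && $1 != "Total" && $4 ~ /^[0-9.]+$/ { print $1, "IOPS=" $4, "MiB/s=" $5; next }
    # Same row carrying the HH:MM:SS.mmm prefix this CI log prepends.
    $3 == ":" && $2 != "Total" && $5 ~ /^[0-9.]+$/ { print $2, "IOPS=" $5, "MiB/s=" $6 }
' "$log"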
00:09:41.123 00:09:41.123 Latency(us) 00:09:41.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.123 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme0n1 : 1.02 7600.80 29.69 0.00 0.00 16796.94 11054.27 106120.94 00:09:41.123 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme1n1p1 : 1.02 7661.82 29.93 0.00 0.00 16639.16 11264.82 93066.38 00:09:41.123 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme1n1p2 : 1.02 7652.04 29.89 0.00 0.00 16619.97 10580.51 93908.61 00:09:41.123 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme2n1 : 1.02 7642.83 29.85 0.00 0.00 16581.57 10738.43 94329.73 00:09:41.123 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme2n2 : 1.02 7634.02 29.82 0.00 0.00 16568.67 10527.87 94329.73 00:09:41.123 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme2n3 : 1.02 7625.97 29.79 0.00 0.00 16556.69 10685.79 94750.84 00:09:41.123 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:41.123 Nvme3n1 : 1.02 7618.07 29.76 0.00 0.00 16543.33 10791.07 94750.84 00:09:41.123 =================================================================================================================== 00:09:41.123 Total : 53435.55 208.73 0.00 0.00 16614.95 10527.87 106120.94 00:09:42.494 00:09:42.494 real 0m3.684s 00:09:42.494 user 0m3.279s 00:09:42.494 sys 0m0.283s 00:09:42.494 12:03:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.494 12:03:30 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:42.494 ************************************ 00:09:42.494 END TEST bdev_write_zeroes 00:09:42.494 ************************************ 00:09:42.494 12:03:30 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:42.494 12:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:42.494 12:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.494 12:03:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:42.494 ************************************ 00:09:42.494 START TEST bdev_json_nonenclosed 00:09:42.494 ************************************ 00:09:42.494 12:03:30 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:42.494 [2024-07-26 12:03:30.434246] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:09:42.494 [2024-07-26 12:03:30.434388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67543 ] 00:09:42.752 [2024-07-26 12:03:30.591775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.009 [2024-07-26 12:03:30.854560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.009 [2024-07-26 12:03:30.854667] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:43.009 [2024-07-26 12:03:30.854695] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:43.009 [2024-07-26 12:03:30.854712] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:43.617 00:09:43.617 real 0m0.999s 00:09:43.617 user 0m0.733s 00:09:43.617 sys 0m0.160s 00:09:43.617 12:03:31 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.617 12:03:31 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:43.617 ************************************ 00:09:43.617 END TEST bdev_json_nonenclosed 00:09:43.617 ************************************ 00:09:43.617 12:03:31 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:43.617 12:03:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:43.617 12:03:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.617 12:03:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:43.617 ************************************ 00:09:43.617 START TEST bdev_json_nonarray 00:09:43.617 ************************************ 00:09:43.617 12:03:31 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:43.617 [2024-07-26 12:03:31.513521] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:43.617 [2024-07-26 12:03:31.513681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67574 ] 00:09:43.874 [2024-07-26 12:03:31.688510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.131 [2024-07-26 12:03:31.926622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.131 [2024-07-26 12:03:31.926740] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
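Both failures above are deliberate: nonenclosed.json and nonarray.json are negative-test inputs fed to bdevperf to confirm that a malformed --json config is rejected cleanly. The log does not reproduce the files themselves, so the shapes below are only illustrative inputs that would provoke the same two json_config errors; the well-formed skeleton follows the "subsystems" layout those error messages imply.

#!/usr/bin/env bash
# Illustrative config shapes for the two negative JSON tests above.
# NOT the actual nonenclosed.json / nonarray.json contents, which this log does not show.

# A well-formed config is a JSON object whose "subsystems" key holds an array:
cat > good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF

# "not enclosed in {}": the top level is not a JSON object.
cat > nonenclosed-shape.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF

# "'subsystems' should be an array": the key exists but holds an object instead.
cat > nonarray-shape.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF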
00:09:44.131 [2024-07-26 12:03:31.926766] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:44.131 [2024-07-26 12:03:31.926783] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.699 00:09:44.699 real 0m0.991s 00:09:44.699 user 0m0.727s 00:09:44.699 sys 0m0.157s 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:44.699 ************************************ 00:09:44.699 END TEST bdev_json_nonarray 00:09:44.699 ************************************ 00:09:44.699 12:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:44.699 12:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:44.699 12:03:32 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:44.699 12:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.699 12:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.699 12:03:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.699 ************************************ 00:09:44.699 START TEST bdev_gpt_uuid 00:09:44.699 ************************************ 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67605 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67605 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67605 ']' 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.699 12:03:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:44.699 [2024-07-26 12:03:32.595376] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
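The suite starting here loads test/bdev/bdev.json into spdk_tgt, reads each GPT partition back with bdev_get_bdevs, and asserts on its aliases and unique_partition_guid fields with jq, as the trace below shows. The same check in free-standing form: the RPC script, the lookup-by-GUID, the jq paths, and the SPDK_TEST_first GUID all come from the trace, while the plain string comparisons are a simplification of the bash pattern matches the test uses.

#!/usr/bin/env bash
# Verify that a GPT partition bdev reports the expected unique partition GUID.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev=Nvme1n1p1
expected=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first GUID from the trace below

json=$("$rpc" bdev_get_bdevs -b "$expected")    # the test looks the bdev up by its GUID alias

[[ $(jq -r 'length' <<<"$json") == 1 ]]                                            # exactly one match
[[ $(jq -r '.[0].aliases[0]' <<<"$json") == "$expected" ]]                         # alias is the GUID
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json") == "$expected" ]]
[[ $(jq -r '.[0].name' <<<"$json") == "$bdev" ]]
echo "GPT GUID check passed for $bdev"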
00:09:44.699 [2024-07-26 12:03:32.595518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67605 ] 00:09:44.957 [2024-07-26 12:03:32.766296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.216 [2024-07-26 12:03:32.999245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.150 12:03:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.150 12:03:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:09:46.150 12:03:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:46.150 12:03:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.150 12:03:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 Some configs were skipped because the RPC state that can call them passed over. 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.407 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:46.407 { 00:09:46.407 "name": "Nvme1n1p1", 00:09:46.407 "aliases": [ 00:09:46.407 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:46.407 ], 00:09:46.407 "product_name": "GPT Disk", 00:09:46.407 "block_size": 4096, 00:09:46.407 "num_blocks": 655104, 00:09:46.407 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:46.407 "assigned_rate_limits": { 00:09:46.408 "rw_ios_per_sec": 0, 00:09:46.408 "rw_mbytes_per_sec": 0, 00:09:46.408 "r_mbytes_per_sec": 0, 00:09:46.408 "w_mbytes_per_sec": 0 00:09:46.408 }, 00:09:46.408 "claimed": false, 00:09:46.408 "zoned": false, 00:09:46.408 "supported_io_types": { 00:09:46.408 "read": true, 00:09:46.408 "write": true, 00:09:46.408 "unmap": true, 00:09:46.408 "flush": true, 00:09:46.408 "reset": true, 00:09:46.408 "nvme_admin": false, 00:09:46.408 "nvme_io": false, 00:09:46.408 "nvme_io_md": false, 00:09:46.408 "write_zeroes": true, 00:09:46.408 "zcopy": false, 00:09:46.408 "get_zone_info": false, 00:09:46.408 "zone_management": false, 00:09:46.408 "zone_append": false, 00:09:46.408 "compare": true, 00:09:46.408 "compare_and_write": false, 00:09:46.408 "abort": true, 00:09:46.408 "seek_hole": false, 00:09:46.408 "seek_data": false, 00:09:46.408 "copy": true, 00:09:46.408 "nvme_iov_md": false 00:09:46.408 }, 00:09:46.408 "driver_specific": { 
00:09:46.408 "gpt": { 00:09:46.408 "base_bdev": "Nvme1n1", 00:09:46.408 "offset_blocks": 256, 00:09:46.408 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:46.408 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:46.408 "partition_name": "SPDK_TEST_first" 00:09:46.408 } 00:09:46.408 } 00:09:46.408 } 00:09:46.408 ]' 00:09:46.408 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:46.408 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:46.408 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.665 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:46.665 { 00:09:46.665 "name": "Nvme1n1p2", 00:09:46.665 "aliases": [ 00:09:46.665 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:46.665 ], 00:09:46.665 "product_name": "GPT Disk", 00:09:46.665 "block_size": 4096, 00:09:46.665 "num_blocks": 655103, 00:09:46.665 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:46.665 "assigned_rate_limits": { 00:09:46.665 "rw_ios_per_sec": 0, 00:09:46.666 "rw_mbytes_per_sec": 0, 00:09:46.666 "r_mbytes_per_sec": 0, 00:09:46.666 "w_mbytes_per_sec": 0 00:09:46.666 }, 00:09:46.666 "claimed": false, 00:09:46.666 "zoned": false, 00:09:46.666 "supported_io_types": { 00:09:46.666 "read": true, 00:09:46.666 "write": true, 00:09:46.666 "unmap": true, 00:09:46.666 "flush": true, 00:09:46.666 "reset": true, 00:09:46.666 "nvme_admin": false, 00:09:46.666 "nvme_io": false, 00:09:46.666 "nvme_io_md": false, 00:09:46.666 "write_zeroes": true, 00:09:46.666 "zcopy": false, 00:09:46.666 "get_zone_info": false, 00:09:46.666 "zone_management": false, 00:09:46.666 "zone_append": false, 00:09:46.666 "compare": true, 00:09:46.666 "compare_and_write": false, 00:09:46.666 "abort": true, 00:09:46.666 "seek_hole": false, 00:09:46.666 "seek_data": false, 00:09:46.666 "copy": true, 00:09:46.666 "nvme_iov_md": false 00:09:46.666 }, 00:09:46.666 "driver_specific": { 00:09:46.666 "gpt": { 00:09:46.666 "base_bdev": "Nvme1n1", 00:09:46.666 "offset_blocks": 655360, 00:09:46.666 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:46.666 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:46.666 "partition_name": "SPDK_TEST_second" 00:09:46.666 } 00:09:46.666 } 00:09:46.666 } 00:09:46.666 ]' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67605 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67605 ']' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67605 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67605 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.666 killing process with pid 67605 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67605' 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67605 00:09:46.666 12:03:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67605 00:09:49.197 ************************************ 00:09:49.197 END TEST bdev_gpt_uuid 00:09:49.197 ************************************ 00:09:49.197 00:09:49.197 real 0m4.627s 00:09:49.197 user 0m4.688s 00:09:49.197 sys 0m0.547s 00:09:49.197 12:03:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.197 12:03:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.197 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:49.197 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:49.197 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:49.197 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:49.197 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:49.455 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:49.455 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:49.455 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:49.455 12:03:37 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:49.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.279 Waiting for block devices as requested 00:09:50.279 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.279 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:50.279 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.537 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.803 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.803 12:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:55.803 12:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:55.803 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:55.803 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:55.803 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:55.803 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:55.803 12:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:55.803 00:09:55.803 real 1m8.523s 00:09:55.803 user 1m24.591s 00:09:55.803 sys 0m11.938s 00:09:55.803 ************************************ 00:09:55.803 END TEST blockdev_nvme_gpt 00:09:55.803 ************************************ 00:09:55.803 12:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.803 12:03:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:56.061 12:03:43 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:56.061 12:03:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:56.061 12:03:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.061 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:09:56.061 ************************************ 00:09:56.061 START TEST nvme 00:09:56.061 ************************************ 00:09:56.061 12:03:43 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:56.061 * Looking for test storage... 00:09:56.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.061 12:03:43 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:56.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.559 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.559 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.559 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.559 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.822 12:03:45 nvme -- nvme/nvme.sh@79 -- # uname 00:09:57.822 12:03:45 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:57.822 12:03:45 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:57.822 12:03:45 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1071 -- # stubpid=68264 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:57.822 Waiting for stub to ready for secondary processes... 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
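For reference, the wipefs lines in the blockdev_nvme_gpt cleanup above erased the on-disk GPT signatures: the 8 bytes 45 46 49 20 50 41 52 54 reported at offsets 0x00001000 and 0x13ffff000 are the ASCII string "EFI PART" (the primary GPT header at LBA 1 and the backup header at the last LBA, consistent with the namespace's 4096-byte logical blocks), and the 2 bytes 55 aa at 0x000001fe are the protective MBR boot signature. A purely illustrative way to confirm the decoding, not part of the test scripts:
    # Decode the 8 GPT signature bytes wipefs reported; prints "EFI PART"
    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'
    # Offset 0x1fe (byte 510) holds the standard MBR boot signature 55 aa,
    # so erasing it invalidates the protective MBR as well.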
00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68264 ]] 00:09:57.822 12:03:45 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:57.822 [2024-07-26 12:03:45.669955] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:09:57.822 [2024-07-26 12:03:45.670635] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:58.757 12:03:46 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:58.757 12:03:46 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68264 ]] 00:09:58.757 12:03:46 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:09:58.757 [2024-07-26 12:03:46.692539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.025 [2024-07-26 12:03:46.927551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.025 [2024-07-26 12:03:46.927698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.025 [2024-07-26 12:03:46.927729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.025 [2024-07-26 12:03:46.947658] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:59.025 [2024-07-26 12:03:46.947699] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.025 [2024-07-26 12:03:46.961731] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:59.025 [2024-07-26 12:03:46.961859] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:59.025 [2024-07-26 12:03:46.966106] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.025 [2024-07-26 12:03:46.966400] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:59.025 [2024-07-26 12:03:46.966506] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:59.025 [2024-07-26 12:03:46.971054] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.025 [2024-07-26 12:03:46.971304] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:59.025 [2024-07-26 12:03:46.971411] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:59.025 [2024-07-26 12:03:46.975856] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:59.025 [2024-07-26 12:03:46.976054] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:59.025 [2024-07-26 12:03:46.976145] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:59.025 [2024-07-26 12:03:46.976219] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:59.025 [2024-07-26 12:03:46.976274] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:00.028 done. 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1078 -- # echo done. 
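The stub readiness wait that just finished ("done.") reduces to a simple poll. The following is a minimal sketch reconstructed from the commands traced above, not the verbatim autotest_common.sh source; the /var/run/spdk_stub0 path and pid 68264 come from this log, and the stub_pid variable name is illustrative:
    #!/usr/bin/env bash
    # Poll until the stub app has created its readiness marker file,
    # giving up if the stub process itself disappears first.
    stub_pid=$1                       # e.g. 68264 in the run above
    while [ ! -e /var/run/spdk_stub0 ]; do
        # /proc/<pid> vanishing means the stub died before becoming ready.
        [[ -e /proc/$stub_pid ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
        sleep 1s
    done
    echo done.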
00:10:00.028 12:03:47 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.028 ************************************ 00:10:00.028 START TEST nvme_reset 00:10:00.028 ************************************ 00:10:00.028 12:03:47 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:00.028 Initializing NVMe Controllers 00:10:00.028 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:00.028 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:00.028 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:00.028 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:00.028 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:00.028 00:10:00.028 real 0m0.269s 00:10:00.028 user 0m0.092s 00:10:00.028 sys 0m0.134s 00:10:00.028 ************************************ 00:10:00.028 END TEST nvme_reset 00:10:00.028 ************************************ 00:10:00.028 12:03:47 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.028 12:03:47 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:00.028 12:03:47 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.028 12:03:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.028 ************************************ 00:10:00.028 START TEST nvme_identify 00:10:00.028 ************************************ 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:10:00.028 12:03:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:00.028 12:03:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:00.028 12:03:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:00.028 12:03:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:00.028 12:03:47 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:00.288 12:03:48 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:00.288 12:03:48 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:00.288 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:00.550 [2024-07-26 12:03:48.272373] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68298 terminated unexpected 00:10:00.550 ===================================================== 00:10:00.550 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.550 
===================================================== 00:10:00.550 Controller Capabilities/Features 00:10:00.550 ================================ 00:10:00.550 Vendor ID: 1b36 00:10:00.550 Subsystem Vendor ID: 1af4 00:10:00.550 Serial Number: 12340 00:10:00.550 Model Number: QEMU NVMe Ctrl 00:10:00.550 Firmware Version: 8.0.0 00:10:00.550 Recommended Arb Burst: 6 00:10:00.550 IEEE OUI Identifier: 00 54 52 00:10:00.550 Multi-path I/O 00:10:00.550 May have multiple subsystem ports: No 00:10:00.550 May have multiple controllers: No 00:10:00.550 Associated with SR-IOV VF: No 00:10:00.550 Max Data Transfer Size: 524288 00:10:00.550 Max Number of Namespaces: 256 00:10:00.550 Max Number of I/O Queues: 64 00:10:00.550 NVMe Specification Version (VS): 1.4 00:10:00.550 NVMe Specification Version (Identify): 1.4 00:10:00.550 Maximum Queue Entries: 2048 00:10:00.550 Contiguous Queues Required: Yes 00:10:00.550 Arbitration Mechanisms Supported 00:10:00.550 Weighted Round Robin: Not Supported 00:10:00.550 Vendor Specific: Not Supported 00:10:00.550 Reset Timeout: 7500 ms 00:10:00.550 Doorbell Stride: 4 bytes 00:10:00.550 NVM Subsystem Reset: Not Supported 00:10:00.550 Command Sets Supported 00:10:00.550 NVM Command Set: Supported 00:10:00.550 Boot Partition: Not Supported 00:10:00.550 Memory Page Size Minimum: 4096 bytes 00:10:00.550 Memory Page Size Maximum: 65536 bytes 00:10:00.550 Persistent Memory Region: Not Supported 00:10:00.550 Optional Asynchronous Events Supported 00:10:00.550 Namespace Attribute Notices: Supported 00:10:00.550 Firmware Activation Notices: Not Supported 00:10:00.550 ANA Change Notices: Not Supported 00:10:00.550 PLE Aggregate Log Change Notices: Not Supported 00:10:00.550 LBA Status Info Alert Notices: Not Supported 00:10:00.550 EGE Aggregate Log Change Notices: Not Supported 00:10:00.550 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.550 Zone Descriptor Change Notices: Not Supported 00:10:00.550 Discovery Log Change Notices: Not Supported 00:10:00.550 Controller Attributes 00:10:00.550 128-bit Host Identifier: Not Supported 00:10:00.550 Non-Operational Permissive Mode: Not Supported 00:10:00.550 NVM Sets: Not Supported 00:10:00.550 Read Recovery Levels: Not Supported 00:10:00.550 Endurance Groups: Not Supported 00:10:00.550 Predictable Latency Mode: Not Supported 00:10:00.550 Traffic Based Keep ALive: Not Supported 00:10:00.550 Namespace Granularity: Not Supported 00:10:00.550 SQ Associations: Not Supported 00:10:00.550 UUID List: Not Supported 00:10:00.550 Multi-Domain Subsystem: Not Supported 00:10:00.550 Fixed Capacity Management: Not Supported 00:10:00.550 Variable Capacity Management: Not Supported 00:10:00.550 Delete Endurance Group: Not Supported 00:10:00.550 Delete NVM Set: Not Supported 00:10:00.550 Extended LBA Formats Supported: Supported 00:10:00.550 Flexible Data Placement Supported: Not Supported 00:10:00.550 00:10:00.550 Controller Memory Buffer Support 00:10:00.550 ================================ 00:10:00.550 Supported: No 00:10:00.550 00:10:00.550 Persistent Memory Region Support 00:10:00.550 ================================ 00:10:00.550 Supported: No 00:10:00.550 00:10:00.550 Admin Command Set Attributes 00:10:00.550 ============================ 00:10:00.550 Security Send/Receive: Not Supported 00:10:00.550 Format NVM: Supported 00:10:00.550 Firmware Activate/Download: Not Supported 00:10:00.550 Namespace Management: Supported 00:10:00.550 Device Self-Test: Not Supported 00:10:00.550 Directives: Supported 00:10:00.550 NVMe-MI: Not Supported 
00:10:00.550 Virtualization Management: Not Supported 00:10:00.550 Doorbell Buffer Config: Supported 00:10:00.550 Get LBA Status Capability: Not Supported 00:10:00.550 Command & Feature Lockdown Capability: Not Supported 00:10:00.550 Abort Command Limit: 4 00:10:00.550 Async Event Request Limit: 4 00:10:00.550 Number of Firmware Slots: N/A 00:10:00.550 Firmware Slot 1 Read-Only: N/A 00:10:00.550 Firmware Activation Without Reset: N/A 00:10:00.550 Multiple Update Detection Support: N/A 00:10:00.550 Firmware Update Granularity: No Information Provided 00:10:00.550 Per-Namespace SMART Log: Yes 00:10:00.550 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.550 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:00.550 Command Effects Log Page: Supported 00:10:00.550 Get Log Page Extended Data: Supported 00:10:00.550 Telemetry Log Pages: Not Supported 00:10:00.550 Persistent Event Log Pages: Not Supported 00:10:00.550 Supported Log Pages Log Page: May Support 00:10:00.551 Commands Supported & Effects Log Page: Not Supported 00:10:00.551 Feature Identifiers & Effects Log Page:May Support 00:10:00.551 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.551 Data Area 4 for Telemetry Log: Not Supported 00:10:00.551 Error Log Page Entries Supported: 1 00:10:00.551 Keep Alive: Not Supported 00:10:00.551 00:10:00.551 NVM Command Set Attributes 00:10:00.551 ========================== 00:10:00.551 Submission Queue Entry Size 00:10:00.551 Max: 64 00:10:00.551 Min: 64 00:10:00.551 Completion Queue Entry Size 00:10:00.551 Max: 16 00:10:00.551 Min: 16 00:10:00.551 Number of Namespaces: 256 00:10:00.551 Compare Command: Supported 00:10:00.551 Write Uncorrectable Command: Not Supported 00:10:00.551 Dataset Management Command: Supported 00:10:00.551 Write Zeroes Command: Supported 00:10:00.551 Set Features Save Field: Supported 00:10:00.551 Reservations: Not Supported 00:10:00.551 Timestamp: Supported 00:10:00.551 Copy: Supported 00:10:00.551 Volatile Write Cache: Present 00:10:00.551 Atomic Write Unit (Normal): 1 00:10:00.551 Atomic Write Unit (PFail): 1 00:10:00.551 Atomic Compare & Write Unit: 1 00:10:00.551 Fused Compare & Write: Not Supported 00:10:00.551 Scatter-Gather List 00:10:00.551 SGL Command Set: Supported 00:10:00.551 SGL Keyed: Not Supported 00:10:00.551 SGL Bit Bucket Descriptor: Not Supported 00:10:00.551 SGL Metadata Pointer: Not Supported 00:10:00.551 Oversized SGL: Not Supported 00:10:00.551 SGL Metadata Address: Not Supported 00:10:00.551 SGL Offset: Not Supported 00:10:00.551 Transport SGL Data Block: Not Supported 00:10:00.551 Replay Protected Memory Block: Not Supported 00:10:00.551 00:10:00.551 Firmware Slot Information 00:10:00.551 ========================= 00:10:00.551 Active slot: 1 00:10:00.551 Slot 1 Firmware Revision: 1.0 00:10:00.551 00:10:00.551 00:10:00.551 Commands Supported and Effects 00:10:00.551 ============================== 00:10:00.551 Admin Commands 00:10:00.551 -------------- 00:10:00.551 Delete I/O Submission Queue (00h): Supported 00:10:00.551 Create I/O Submission Queue (01h): Supported 00:10:00.551 Get Log Page (02h): Supported 00:10:00.551 Delete I/O Completion Queue (04h): Supported 00:10:00.551 Create I/O Completion Queue (05h): Supported 00:10:00.551 Identify (06h): Supported 00:10:00.551 Abort (08h): Supported 00:10:00.551 Set Features (09h): Supported 00:10:00.551 Get Features (0Ah): Supported 00:10:00.551 Asynchronous Event Request (0Ch): Supported 00:10:00.551 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.551 Directive 
Send (19h): Supported 00:10:00.551 Directive Receive (1Ah): Supported 00:10:00.551 Virtualization Management (1Ch): Supported 00:10:00.551 Doorbell Buffer Config (7Ch): Supported 00:10:00.551 Format NVM (80h): Supported LBA-Change 00:10:00.551 I/O Commands 00:10:00.551 ------------ 00:10:00.551 Flush (00h): Supported LBA-Change 00:10:00.551 Write (01h): Supported LBA-Change 00:10:00.551 Read (02h): Supported 00:10:00.551 Compare (05h): Supported 00:10:00.551 Write Zeroes (08h): Supported LBA-Change 00:10:00.551 Dataset Management (09h): Supported LBA-Change 00:10:00.551 Unknown (0Ch): Supported 00:10:00.551 Unknown (12h): Supported 00:10:00.551 Copy (19h): Supported LBA-Change 00:10:00.551 Unknown (1Dh): Supported LBA-Change 00:10:00.551 00:10:00.551 Error Log 00:10:00.551 ========= 00:10:00.551 00:10:00.551 Arbitration 00:10:00.551 =========== 00:10:00.551 Arbitration Burst: no limit 00:10:00.551 00:10:00.551 Power Management 00:10:00.551 ================ 00:10:00.551 Number of Power States: 1 00:10:00.551 Current Power State: Power State #0 00:10:00.551 Power State #0: 00:10:00.551 Max Power: 25.00 W 00:10:00.551 Non-Operational State: Operational 00:10:00.551 Entry Latency: 16 microseconds 00:10:00.551 Exit Latency: 4 microseconds 00:10:00.551 Relative Read Throughput: 0 00:10:00.551 Relative Read Latency: 0 00:10:00.551 Relative Write Throughput: 0 00:10:00.551 Relative Write Latency: 0 00:10:00.551 Idle Power[2024-07-26 12:03:48.273746] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68298 terminated unexpected 00:10:00.551 : Not Reported 00:10:00.551 Active Power: Not Reported 00:10:00.551 Non-Operational Permissive Mode: Not Supported 00:10:00.551 00:10:00.551 Health Information 00:10:00.551 ================== 00:10:00.551 Critical Warnings: 00:10:00.551 Available Spare Space: OK 00:10:00.551 Temperature: OK 00:10:00.551 Device Reliability: OK 00:10:00.551 Read Only: No 00:10:00.551 Volatile Memory Backup: OK 00:10:00.551 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.551 Available Spare: 0% 00:10:00.551 Available Spare Threshold: 0% 00:10:00.551 Life Percentage Used: 0% 00:10:00.551 Data Units Read: 791 00:10:00.551 Data Units Written: 683 00:10:00.551 Host Read Commands: 38763 00:10:00.551 Host Write Commands: 37801 00:10:00.551 Controller Busy Time: 0 minutes 00:10:00.551 Power Cycles: 0 00:10:00.551 Power On Hours: 0 hours 00:10:00.551 Unsafe Shutdowns: 0 00:10:00.551 Unrecoverable Media Errors: 0 00:10:00.551 Lifetime Error Log Entries: 0 00:10:00.551 Warning Temperature Time: 0 minutes 00:10:00.551 Critical Temperature Time: 0 minutes 00:10:00.551 00:10:00.551 Number of Queues 00:10:00.551 ================ 00:10:00.551 Number of I/O Submission Queues: 64 00:10:00.551 Number of I/O Completion Queues: 64 00:10:00.551 00:10:00.551 ZNS Specific Controller Data 00:10:00.551 ============================ 00:10:00.551 Zone Append Size Limit: 0 00:10:00.551 00:10:00.551 00:10:00.551 Active Namespaces 00:10:00.551 ================= 00:10:00.551 Namespace ID:1 00:10:00.551 Error Recovery Timeout: Unlimited 00:10:00.551 Command Set Identifier: NVM (00h) 00:10:00.551 Deallocate: Supported 00:10:00.551 Deallocated/Unwritten Error: Supported 00:10:00.551 Deallocated Read Value: All 0x00 00:10:00.551 Deallocate in Write Zeroes: Not Supported 00:10:00.551 Deallocated Guard Field: 0xFFFF 00:10:00.551 Flush: Supported 00:10:00.551 Reservation: Not Supported 00:10:00.551 Metadata Transferred as: 
Separate Metadata Buffer 00:10:00.551 Namespace Sharing Capabilities: Private 00:10:00.551 Size (in LBAs): 1548666 (5GiB) 00:10:00.551 Capacity (in LBAs): 1548666 (5GiB) 00:10:00.551 Utilization (in LBAs): 1548666 (5GiB) 00:10:00.551 Thin Provisioning: Not Supported 00:10:00.551 Per-NS Atomic Units: No 00:10:00.551 Maximum Single Source Range Length: 128 00:10:00.551 Maximum Copy Length: 128 00:10:00.551 Maximum Source Range Count: 128 00:10:00.551 NGUID/EUI64 Never Reused: No 00:10:00.551 Namespace Write Protected: No 00:10:00.551 Number of LBA Formats: 8 00:10:00.551 Current LBA Format: LBA Format #07 00:10:00.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.551 00:10:00.551 NVM Specific Namespace Data 00:10:00.551 =========================== 00:10:00.551 Logical Block Storage Tag Mask: 0 00:10:00.551 Protection Information Capabilities: 00:10:00.551 16b Guard Protection Information Storage Tag Support: No 00:10:00.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.552 Storage Tag Check Read Support: No 00:10:00.552 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.552 ===================================================== 00:10:00.552 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.552 ===================================================== 00:10:00.552 Controller Capabilities/Features 00:10:00.552 ================================ 00:10:00.552 Vendor ID: 1b36 00:10:00.552 Subsystem Vendor ID: 1af4 00:10:00.552 Serial Number: 12341 00:10:00.552 Model Number: QEMU NVMe Ctrl 00:10:00.552 Firmware Version: 8.0.0 00:10:00.552 Recommended Arb Burst: 6 00:10:00.552 IEEE OUI Identifier: 00 54 52 00:10:00.552 Multi-path I/O 00:10:00.552 May have multiple subsystem ports: No 00:10:00.552 May have multiple controllers: No 00:10:00.552 Associated with SR-IOV VF: No 00:10:00.552 Max Data Transfer Size: 524288 00:10:00.552 Max Number of Namespaces: 256 00:10:00.552 Max Number of I/O Queues: 64 00:10:00.552 NVMe Specification Version (VS): 1.4 00:10:00.552 NVMe Specification Version (Identify): 1.4 00:10:00.552 Maximum Queue Entries: 2048 00:10:00.552 Contiguous Queues Required: Yes 00:10:00.552 Arbitration Mechanisms Supported 00:10:00.552 Weighted Round Robin: Not Supported 00:10:00.552 Vendor Specific: Not Supported 00:10:00.552 Reset Timeout: 7500 ms 
00:10:00.552 Doorbell Stride: 4 bytes 00:10:00.552 NVM Subsystem Reset: Not Supported 00:10:00.552 Command Sets Supported 00:10:00.552 NVM Command Set: Supported 00:10:00.552 Boot Partition: Not Supported 00:10:00.552 Memory Page Size Minimum: 4096 bytes 00:10:00.552 Memory Page Size Maximum: 65536 bytes 00:10:00.552 Persistent Memory Region: Not Supported 00:10:00.552 Optional Asynchronous Events Supported 00:10:00.552 Namespace Attribute Notices: Supported 00:10:00.552 Firmware Activation Notices: Not Supported 00:10:00.552 ANA Change Notices: Not Supported 00:10:00.552 PLE Aggregate Log Change Notices: Not Supported 00:10:00.552 LBA Status Info Alert Notices: Not Supported 00:10:00.552 EGE Aggregate Log Change Notices: Not Supported 00:10:00.552 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.552 Zone Descriptor Change Notices: Not Supported 00:10:00.552 Discovery Log Change Notices: Not Supported 00:10:00.552 Controller Attributes 00:10:00.552 128-bit Host Identifier: Not Supported 00:10:00.552 Non-Operational Permissive Mode: Not Supported 00:10:00.552 NVM Sets: Not Supported 00:10:00.552 Read Recovery Levels: Not Supported 00:10:00.552 Endurance Groups: Not Supported 00:10:00.552 Predictable Latency Mode: Not Supported 00:10:00.552 Traffic Based Keep ALive: Not Supported 00:10:00.552 Namespace Granularity: Not Supported 00:10:00.552 SQ Associations: Not Supported 00:10:00.552 UUID List: Not Supported 00:10:00.552 Multi-Domain Subsystem: Not Supported 00:10:00.552 Fixed Capacity Management: Not Supported 00:10:00.552 Variable Capacity Management: Not Supported 00:10:00.552 Delete Endurance Group: Not Supported 00:10:00.552 Delete NVM Set: Not Supported 00:10:00.552 Extended LBA Formats Supported: Supported 00:10:00.552 Flexible Data Placement Supported: Not Supported 00:10:00.552 00:10:00.552 Controller Memory Buffer Support 00:10:00.552 ================================ 00:10:00.552 Supported: No 00:10:00.552 00:10:00.552 Persistent Memory Region Support 00:10:00.552 ================================ 00:10:00.552 Supported: No 00:10:00.552 00:10:00.552 Admin Command Set Attributes 00:10:00.552 ============================ 00:10:00.552 Security Send/Receive: Not Supported 00:10:00.552 Format NVM: Supported 00:10:00.552 Firmware Activate/Download: Not Supported 00:10:00.552 Namespace Management: Supported 00:10:00.552 Device Self-Test: Not Supported 00:10:00.552 Directives: Supported 00:10:00.552 NVMe-MI: Not Supported 00:10:00.552 Virtualization Management: Not Supported 00:10:00.552 Doorbell Buffer Config: Supported 00:10:00.552 Get LBA Status Capability: Not Supported 00:10:00.552 Command & Feature Lockdown Capability: Not Supported 00:10:00.552 Abort Command Limit: 4 00:10:00.552 Async Event Request Limit: 4 00:10:00.552 Number of Firmware Slots: N/A 00:10:00.552 Firmware Slot 1 Read-Only: N/A 00:10:00.552 Firmware Activation Without Reset: N/A 00:10:00.552 Multiple Update Detection Support: N/A 00:10:00.552 Firmware Update Granularity: No Information Provided 00:10:00.552 Per-Namespace SMART Log: Yes 00:10:00.552 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.552 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:00.552 Command Effects Log Page: Supported 00:10:00.552 Get Log Page Extended Data: Supported 00:10:00.552 Telemetry Log Pages: Not Supported 00:10:00.552 Persistent Event Log Pages: Not Supported 00:10:00.552 Supported Log Pages Log Page: May Support 00:10:00.552 Commands Supported & Effects Log Page: Not Supported 00:10:00.552 Feature Identifiers & 
Effects Log Page:May Support 00:10:00.552 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.552 Data Area 4 for Telemetry Log: Not Supported 00:10:00.552 Error Log Page Entries Supported: 1 00:10:00.552 Keep Alive: Not Supported 00:10:00.552 00:10:00.552 NVM Command Set Attributes 00:10:00.552 ========================== 00:10:00.552 Submission Queue Entry Size 00:10:00.552 Max: 64 00:10:00.552 Min: 64 00:10:00.552 Completion Queue Entry Size 00:10:00.552 Max: 16 00:10:00.552 Min: 16 00:10:00.552 Number of Namespaces: 256 00:10:00.552 Compare Command: Supported 00:10:00.552 Write Uncorrectable Command: Not Supported 00:10:00.552 Dataset Management Command: Supported 00:10:00.552 Write Zeroes Command: Supported 00:10:00.552 Set Features Save Field: Supported 00:10:00.552 Reservations: Not Supported 00:10:00.552 Timestamp: Supported 00:10:00.552 Copy: Supported 00:10:00.552 Volatile Write Cache: Present 00:10:00.552 Atomic Write Unit (Normal): 1 00:10:00.552 Atomic Write Unit (PFail): 1 00:10:00.552 Atomic Compare & Write Unit: 1 00:10:00.552 Fused Compare & Write: Not Supported 00:10:00.552 Scatter-Gather List 00:10:00.552 SGL Command Set: Supported 00:10:00.552 SGL Keyed: Not Supported 00:10:00.552 SGL Bit Bucket Descriptor: Not Supported 00:10:00.552 SGL Metadata Pointer: Not Supported 00:10:00.552 Oversized SGL: Not Supported 00:10:00.552 SGL Metadata Address: Not Supported 00:10:00.552 SGL Offset: Not Supported 00:10:00.552 Transport SGL Data Block: Not Supported 00:10:00.552 Replay Protected Memory Block: Not Supported 00:10:00.552 00:10:00.552 Firmware Slot Information 00:10:00.552 ========================= 00:10:00.552 Active slot: 1 00:10:00.552 Slot 1 Firmware Revision: 1.0 00:10:00.552 00:10:00.552 00:10:00.552 Commands Supported and Effects 00:10:00.552 ============================== 00:10:00.552 Admin Commands 00:10:00.552 -------------- 00:10:00.552 Delete I/O Submission Queue (00h): Supported 00:10:00.552 Create I/O Submission Queue (01h): Supported 00:10:00.552 Get Log Page (02h): Supported 00:10:00.552 Delete I/O Completion Queue (04h): Supported 00:10:00.552 Create I/O Completion Queue (05h): Supported 00:10:00.552 Identify (06h): Supported 00:10:00.552 Abort (08h): Supported 00:10:00.553 Set Features (09h): Supported 00:10:00.553 Get Features (0Ah): Supported 00:10:00.553 Asynchronous Event Request (0Ch): Supported 00:10:00.553 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.553 Directive Send (19h): Supported 00:10:00.553 Directive Receive (1Ah): Supported 00:10:00.553 Virtualization Management (1Ch): Supported 00:10:00.553 Doorbell Buffer Config (7Ch): Supported 00:10:00.553 Format NVM (80h): Supported LBA-Change 00:10:00.553 I/O Commands 00:10:00.553 ------------ 00:10:00.553 Flush (00h): Supported LBA-Change 00:10:00.553 Write (01h): Supported LBA-Change 00:10:00.553 Read (02h): Supported 00:10:00.553 Compare (05h): Supported 00:10:00.553 Write Zeroes (08h): Supported LBA-Change 00:10:00.553 Dataset Management (09h): Supported LBA-Change 00:10:00.553 Unknown (0Ch): Supported 00:10:00.553 Unknown (12h): Supported 00:10:00.553 Copy (19h): Supported LBA-Change 00:10:00.553 Unknown (1Dh): Supported LBA-Change 00:10:00.553 00:10:00.553 Error Log 00:10:00.553 ========= 00:10:00.553 00:10:00.553 Arbitration 00:10:00.553 =========== 00:10:00.553 Arbitration Burst: no limit 00:10:00.553 00:10:00.553 Power Management 00:10:00.553 ================ 00:10:00.553 Number of Power States: 1 00:10:00.553 Current Power State: Power State #0 00:10:00.553 Power 
State #0: 00:10:00.553 Max Power: 25.00 W 00:10:00.553 Non-Operational State: Operational 00:10:00.553 Entry Latency: 16 microseconds 00:10:00.553 Exit Latency: 4 microseconds 00:10:00.553 Relative Read Throughput: 0 00:10:00.553 Relative Read Latency: 0 00:10:00.553 Relative Write Throughput: 0 00:10:00.553 Relative Write Latency: 0 00:10:00.553 Idle Power: Not Reported 00:10:00.553 Active Power: Not Reported 00:10:00.553 Non-Operational Permissive Mode: Not Supported 00:10:00.553 00:10:00.553 Health Information 00:10:00.553 ================== 00:10:00.553 Critical Warnings: 00:10:00.553 Available Spare Space: OK 00:10:00.553 Temperature: [2024-07-26 12:03:48.274443] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68298 terminated unexpected 00:10:00.553 OK 00:10:00.553 Device Reliability: OK 00:10:00.553 Read Only: No 00:10:00.553 Volatile Memory Backup: OK 00:10:00.553 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.553 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.553 Available Spare: 0% 00:10:00.553 Available Spare Threshold: 0% 00:10:00.553 Life Percentage Used: 0% 00:10:00.553 Data Units Read: 1211 00:10:00.553 Data Units Written: 995 00:10:00.553 Host Read Commands: 57715 00:10:00.553 Host Write Commands: 54761 00:10:00.553 Controller Busy Time: 0 minutes 00:10:00.553 Power Cycles: 0 00:10:00.553 Power On Hours: 0 hours 00:10:00.553 Unsafe Shutdowns: 0 00:10:00.553 Unrecoverable Media Errors: 0 00:10:00.553 Lifetime Error Log Entries: 0 00:10:00.553 Warning Temperature Time: 0 minutes 00:10:00.553 Critical Temperature Time: 0 minutes 00:10:00.553 00:10:00.553 Number of Queues 00:10:00.553 ================ 00:10:00.553 Number of I/O Submission Queues: 64 00:10:00.553 Number of I/O Completion Queues: 64 00:10:00.553 00:10:00.553 ZNS Specific Controller Data 00:10:00.553 ============================ 00:10:00.553 Zone Append Size Limit: 0 00:10:00.553 00:10:00.553 00:10:00.553 Active Namespaces 00:10:00.553 ================= 00:10:00.553 Namespace ID:1 00:10:00.553 Error Recovery Timeout: Unlimited 00:10:00.553 Command Set Identifier: NVM (00h) 00:10:00.553 Deallocate: Supported 00:10:00.553 Deallocated/Unwritten Error: Supported 00:10:00.553 Deallocated Read Value: All 0x00 00:10:00.553 Deallocate in Write Zeroes: Not Supported 00:10:00.553 Deallocated Guard Field: 0xFFFF 00:10:00.553 Flush: Supported 00:10:00.553 Reservation: Not Supported 00:10:00.553 Namespace Sharing Capabilities: Private 00:10:00.553 Size (in LBAs): 1310720 (5GiB) 00:10:00.553 Capacity (in LBAs): 1310720 (5GiB) 00:10:00.553 Utilization (in LBAs): 1310720 (5GiB) 00:10:00.553 Thin Provisioning: Not Supported 00:10:00.553 Per-NS Atomic Units: No 00:10:00.553 Maximum Single Source Range Length: 128 00:10:00.553 Maximum Copy Length: 128 00:10:00.553 Maximum Source Range Count: 128 00:10:00.553 NGUID/EUI64 Never Reused: No 00:10:00.553 Namespace Write Protected: No 00:10:00.553 Number of LBA Formats: 8 00:10:00.553 Current LBA Format: LBA Format #04 00:10:00.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.553 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.553 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.553 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.553 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.553 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.553 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.553 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.553 00:10:00.553 NVM 
Specific Namespace Data 00:10:00.553 =========================== 00:10:00.553 Logical Block Storage Tag Mask: 0 00:10:00.553 Protection Information Capabilities: 00:10:00.553 16b Guard Protection Information Storage Tag Support: No 00:10:00.553 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.553 Storage Tag Check Read Support: No 00:10:00.553 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.553 ===================================================== 00:10:00.553 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.553 ===================================================== 00:10:00.553 Controller Capabilities/Features 00:10:00.553 ================================ 00:10:00.553 Vendor ID: 1b36 00:10:00.553 Subsystem Vendor ID: 1af4 00:10:00.553 Serial Number: 12343 00:10:00.553 Model Number: QEMU NVMe Ctrl 00:10:00.553 Firmware Version: 8.0.0 00:10:00.553 Recommended Arb Burst: 6 00:10:00.553 IEEE OUI Identifier: 00 54 52 00:10:00.553 Multi-path I/O 00:10:00.553 May have multiple subsystem ports: No 00:10:00.553 May have multiple controllers: Yes 00:10:00.553 Associated with SR-IOV VF: No 00:10:00.553 Max Data Transfer Size: 524288 00:10:00.553 Max Number of Namespaces: 256 00:10:00.553 Max Number of I/O Queues: 64 00:10:00.553 NVMe Specification Version (VS): 1.4 00:10:00.553 NVMe Specification Version (Identify): 1.4 00:10:00.553 Maximum Queue Entries: 2048 00:10:00.553 Contiguous Queues Required: Yes 00:10:00.553 Arbitration Mechanisms Supported 00:10:00.553 Weighted Round Robin: Not Supported 00:10:00.553 Vendor Specific: Not Supported 00:10:00.553 Reset Timeout: 7500 ms 00:10:00.553 Doorbell Stride: 4 bytes 00:10:00.553 NVM Subsystem Reset: Not Supported 00:10:00.553 Command Sets Supported 00:10:00.553 NVM Command Set: Supported 00:10:00.553 Boot Partition: Not Supported 00:10:00.553 Memory Page Size Minimum: 4096 bytes 00:10:00.553 Memory Page Size Maximum: 65536 bytes 00:10:00.553 Persistent Memory Region: Not Supported 00:10:00.554 Optional Asynchronous Events Supported 00:10:00.554 Namespace Attribute Notices: Supported 00:10:00.554 Firmware Activation Notices: Not Supported 00:10:00.554 ANA Change Notices: Not Supported 00:10:00.554 PLE Aggregate Log Change Notices: Not Supported 00:10:00.554 LBA Status Info Alert Notices: Not Supported 00:10:00.554 EGE Aggregate Log Change Notices: Not Supported 00:10:00.554 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.554 Zone Descriptor Change Notices: Not Supported 00:10:00.554 Discovery Log Change Notices: Not Supported 00:10:00.554 Controller Attributes 00:10:00.554 128-bit Host Identifier: Not Supported 00:10:00.554 Non-Operational Permissive Mode: Not Supported 00:10:00.554 NVM Sets: Not Supported 00:10:00.554 Read Recovery 
Levels: Not Supported 00:10:00.554 Endurance Groups: Supported 00:10:00.554 Predictable Latency Mode: Not Supported 00:10:00.554 Traffic Based Keep ALive: Not Supported 00:10:00.554 Namespace Granularity: Not Supported 00:10:00.554 SQ Associations: Not Supported 00:10:00.554 UUID List: Not Supported 00:10:00.554 Multi-Domain Subsystem: Not Supported 00:10:00.554 Fixed Capacity Management: Not Supported 00:10:00.554 Variable Capacity Management: Not Supported 00:10:00.554 Delete Endurance Group: Not Supported 00:10:00.554 Delete NVM Set: Not Supported 00:10:00.554 Extended LBA Formats Supported: Supported 00:10:00.554 Flexible Data Placement Supported: Supported 00:10:00.554 00:10:00.554 Controller Memory Buffer Support 00:10:00.554 ================================ 00:10:00.554 Supported: No 00:10:00.554 00:10:00.554 Persistent Memory Region Support 00:10:00.554 ================================ 00:10:00.554 Supported: No 00:10:00.554 00:10:00.554 Admin Command Set Attributes 00:10:00.554 ============================ 00:10:00.554 Security Send/Receive: Not Supported 00:10:00.554 Format NVM: Supported 00:10:00.554 Firmware Activate/Download: Not Supported 00:10:00.554 Namespace Management: Supported 00:10:00.554 Device Self-Test: Not Supported 00:10:00.554 Directives: Supported 00:10:00.554 NVMe-MI: Not Supported 00:10:00.554 Virtualization Management: Not Supported 00:10:00.554 Doorbell Buffer Config: Supported 00:10:00.554 Get LBA Status Capability: Not Supported 00:10:00.554 Command & Feature Lockdown Capability: Not Supported 00:10:00.554 Abort Command Limit: 4 00:10:00.554 Async Event Request Limit: 4 00:10:00.554 Number of Firmware Slots: N/A 00:10:00.554 Firmware Slot 1 Read-Only: N/A 00:10:00.554 Firmware Activation Without Reset: N/A 00:10:00.554 Multiple Update Detection Support: N/A 00:10:00.554 Firmware Update Granularity: No Information Provided 00:10:00.554 Per-Namespace SMART Log: Yes 00:10:00.554 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.554 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.554 Command Effects Log Page: Supported 00:10:00.554 Get Log Page Extended Data: Supported 00:10:00.554 Telemetry Log Pages: Not Supported 00:10:00.554 Persistent Event Log Pages: Not Supported 00:10:00.554 Supported Log Pages Log Page: May Support 00:10:00.554 Commands Supported & Effects Log Page: Not Supported 00:10:00.554 Feature Identifiers & Effects Log Page:May Support 00:10:00.554 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.554 Data Area 4 for Telemetry Log: Not Supported 00:10:00.554 Error Log Page Entries Supported: 1 00:10:00.554 Keep Alive: Not Supported 00:10:00.554 00:10:00.554 NVM Command Set Attributes 00:10:00.554 ========================== 00:10:00.554 Submission Queue Entry Size 00:10:00.554 Max: 64 00:10:00.554 Min: 64 00:10:00.554 Completion Queue Entry Size 00:10:00.554 Max: 16 00:10:00.554 Min: 16 00:10:00.554 Number of Namespaces: 256 00:10:00.554 Compare Command: Supported 00:10:00.554 Write Uncorrectable Command: Not Supported 00:10:00.554 Dataset Management Command: Supported 00:10:00.554 Write Zeroes Command: Supported 00:10:00.554 Set Features Save Field: Supported 00:10:00.554 Reservations: Not Supported 00:10:00.554 Timestamp: Supported 00:10:00.554 Copy: Supported 00:10:00.554 Volatile Write Cache: Present 00:10:00.554 Atomic Write Unit (Normal): 1 00:10:00.554 Atomic Write Unit (PFail): 1 00:10:00.554 Atomic Compare & Write Unit: 1 00:10:00.554 Fused Compare & Write: Not Supported 00:10:00.554 Scatter-Gather List 
00:10:00.554 SGL Command Set: Supported 00:10:00.554 SGL Keyed: Not Supported 00:10:00.554 SGL Bit Bucket Descriptor: Not Supported 00:10:00.554 SGL Metadata Pointer: Not Supported 00:10:00.554 Oversized SGL: Not Supported 00:10:00.554 SGL Metadata Address: Not Supported 00:10:00.554 SGL Offset: Not Supported 00:10:00.554 Transport SGL Data Block: Not Supported 00:10:00.554 Replay Protected Memory Block: Not Supported 00:10:00.554 00:10:00.554 Firmware Slot Information 00:10:00.554 ========================= 00:10:00.554 Active slot: 1 00:10:00.554 Slot 1 Firmware Revision: 1.0 00:10:00.554 00:10:00.554 00:10:00.554 Commands Supported and Effects 00:10:00.554 ============================== 00:10:00.554 Admin Commands 00:10:00.554 -------------- 00:10:00.554 Delete I/O Submission Queue (00h): Supported 00:10:00.554 Create I/O Submission Queue (01h): Supported 00:10:00.554 Get Log Page (02h): Supported 00:10:00.554 Delete I/O Completion Queue (04h): Supported 00:10:00.554 Create I/O Completion Queue (05h): Supported 00:10:00.554 Identify (06h): Supported 00:10:00.554 Abort (08h): Supported 00:10:00.554 Set Features (09h): Supported 00:10:00.554 Get Features (0Ah): Supported 00:10:00.554 Asynchronous Event Request (0Ch): Supported 00:10:00.554 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.554 Directive Send (19h): Supported 00:10:00.554 Directive Receive (1Ah): Supported 00:10:00.554 Virtualization Management (1Ch): Supported 00:10:00.554 Doorbell Buffer Config (7Ch): Supported 00:10:00.554 Format NVM (80h): Supported LBA-Change 00:10:00.554 I/O Commands 00:10:00.554 ------------ 00:10:00.554 Flush (00h): Supported LBA-Change 00:10:00.554 Write (01h): Supported LBA-Change 00:10:00.554 Read (02h): Supported 00:10:00.554 Compare (05h): Supported 00:10:00.554 Write Zeroes (08h): Supported LBA-Change 00:10:00.554 Dataset Management (09h): Supported LBA-Change 00:10:00.554 Unknown (0Ch): Supported 00:10:00.554 Unknown (12h): Supported 00:10:00.554 Copy (19h): Supported LBA-Change 00:10:00.554 Unknown (1Dh): Supported LBA-Change 00:10:00.554 00:10:00.554 Error Log 00:10:00.554 ========= 00:10:00.554 00:10:00.555 Arbitration 00:10:00.555 =========== 00:10:00.555 Arbitration Burst: no limit 00:10:00.555 00:10:00.555 Power Management 00:10:00.555 ================ 00:10:00.555 Number of Power States: 1 00:10:00.555 Current Power State: Power State #0 00:10:00.555 Power State #0: 00:10:00.555 Max Power: 25.00 W 00:10:00.555 Non-Operational State: Operational 00:10:00.555 Entry Latency: 16 microseconds 00:10:00.555 Exit Latency: 4 microseconds 00:10:00.555 Relative Read Throughput: 0 00:10:00.555 Relative Read Latency: 0 00:10:00.555 Relative Write Throughput: 0 00:10:00.555 Relative Write Latency: 0 00:10:00.555 Idle Power: Not Reported 00:10:00.555 Active Power: Not Reported 00:10:00.555 Non-Operational Permissive Mode: Not Supported 00:10:00.555 00:10:00.555 Health Information 00:10:00.555 ================== 00:10:00.555 Critical Warnings: 00:10:00.555 Available Spare Space: OK 00:10:00.555 Temperature: OK 00:10:00.555 Device Reliability: OK 00:10:00.555 Read Only: No 00:10:00.555 Volatile Memory Backup: OK 00:10:00.555 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.555 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.555 Available Spare: 0% 00:10:00.555 Available Spare Threshold: 0% 00:10:00.555 Life Percentage Used: 0% 00:10:00.555 Data Units Read: 913 00:10:00.555 Data Units Written: 806 00:10:00.555 Host Read Commands: 40225 00:10:00.555 Host Write Commands: 38815 
00:10:00.555 Controller Busy Time: 0 minutes 00:10:00.555 Power Cycles: 0 00:10:00.555 Power On Hours: 0 hours 00:10:00.555 Unsafe Shutdowns: 0 00:10:00.555 Unrecoverable Media Errors: 0 00:10:00.555 Lifetime Error Log Entries: 0 00:10:00.555 Warning Temperature Time: 0 minutes 00:10:00.555 Critical Temperature Time: 0 minutes 00:10:00.555 00:10:00.555 Number of Queues 00:10:00.555 ================ 00:10:00.555 Number of I/O Submission Queues: 64 00:10:00.555 Number of I/O Completion Queues: 64 00:10:00.555 00:10:00.555 ZNS Specific Controller Data 00:10:00.555 ============================ 00:10:00.555 Zone Append Size Limit: 0 00:10:00.555 00:10:00.555 00:10:00.555 Active Namespaces 00:10:00.555 ================= 00:10:00.555 Namespace ID:1 00:10:00.555 Error Recovery Timeout: Unlimited 00:10:00.555 Command Set Identifier: NVM (00h) 00:10:00.555 Deallocate: Supported 00:10:00.555 Deallocated/Unwritten Error: Supported 00:10:00.555 Deallocated Read Value: All 0x00 00:10:00.555 Deallocate in Write Zeroes: Not Supported 00:10:00.555 Deallocated Guard Field: 0xFFFF 00:10:00.555 Flush: Supported 00:10:00.555 Reservation: Not Supported 00:10:00.555 Namespace Sharing Capabilities: Multiple Controllers 00:10:00.555 Size (in LBAs): 262144 (1GiB) 00:10:00.555 Capacity (in LBAs): 262144 (1GiB) 00:10:00.555 Utilization (in LBAs): 262144 (1GiB) 00:10:00.555 Thin Provisioning: Not Supported 00:10:00.555 Per-NS Atomic Units: No 00:10:00.555 Maximum Single Source Range Length: 128 00:10:00.555 Maximum Copy Length: 128 00:10:00.555 Maximum Source Range Count: 128 00:10:00.555 NGUID/EUI64 Never Reused: No 00:10:00.555 Namespace Write Protected: No 00:10:00.555 Endurance group ID: 1 00:10:00.555 Number of LBA Formats: 8 00:10:00.555 Current LBA Format: LBA Format #04 00:10:00.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.555 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.555 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.555 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.555 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.555 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.555 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.555 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.555 00:10:00.555 Get Feature FDP: 00:10:00.555 ================ 00:10:00.555 Enabled: Yes 00:10:00.555 FDP configuration index: 0 00:10:00.555 00:10:00.555 FDP configurations log page 00:10:00.555 =========================== 00:10:00.555 Number of FDP configurations: 1 00:10:00.555 Version: 0 00:10:00.555 Size: 112 00:10:00.555 FDP Configuration Descriptor: 0 00:10:00.555 Descriptor Size: 96 00:10:00.555 Reclaim Group Identifier format: 2 00:10:00.555 FDP Volatile Write Cache: Not Present 00:10:00.555 FDP Configuration: Valid 00:10:00.555 Vendor Specific Size: 0 00:10:00.555 Number of Reclaim Groups: 2 00:10:00.555 Number of Recalim Unit Handles: 8 00:10:00.555 Max Placement Identifiers: 128 00:10:00.555 Number of Namespaces Suppprted: 256 00:10:00.555 Reclaim unit Nominal Size: 6000000 bytes 00:10:00.555 Estimated Reclaim Unit Time Limit: Not Reported 00:10:00.555 RUH Desc #000: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #001: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #002: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #003: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #004: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #005: RUH Type: Initially Isolated 00:10:00.555 RUH Desc #006: RUH Type: Initially Isolated 
00:10:00.555 RUH Desc #007: RUH Type: Initially Isolated 00:10:00.555 00:10:00.555 FDP reclaim unit handle usage log page 00:10:00.555 ====================================== 00:10:00.555 Number of Reclaim Unit Handles: 8 00:10:00.555 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:00.555 RUH Usage Desc #001: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #002: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #003: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #004: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #005: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #006: RUH Attributes: Unused 00:10:00.555 RUH Usage Desc #007: RUH Attributes: Unused 00:10:00.555 00:10:00.555 FDP statistics log page 00:10:00.555 ======================= 00:10:00.555 Host bytes with metadata written: 500998144 00:10:00.555 Media bytes with metadata written: 501051392 00:10:00.555 Media bytes erased: 0 00:10:00.555 00:10:00.555 FDP events log page 00:10:00.555 =================== 00:10:00.555 Number of FDP events: 0 00:10:00.555 00:10:00.555 NVM Specific Namespace Data 00:10:00.555 =========================== 00:10:00.555 Logical Block Storage Tag Mask: 0 00:10:00.555 Protection Information Capabilities: 00:10:00.555 16b Guard Protection Information Storage Tag Support: No 00:10:00.555 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.555 Storage Tag Check Read Support: No 00:10:00.555 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.555 ===================================================== 00:10:00.555 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.555 ===================================================== 00:10:00.555 Controller Capabilities/Features 00:10:00.555 ================================ 00:10:00.555 Vendor ID: 1b36 00:10:00.555 Subsystem Vendor ID: 1af4 00:10:00.555 Serial Number: 12342 00:10:00.555 Model Number: QEMU NVMe Ctrl 00:10:00.555 Firmware Version: 8.0.0 00:10:00.555 Recommended Arb Burst: 6 00:10:00.555 IEEE OUI Identifier: 00 54 52 00:10:00.556 Multi-path I/O 00:10:00.556 May have multiple subsystem ports: No 00:10:00.556 May have multiple controllers: No 00:10:00.556 Associated with SR-IOV VF: No 00:10:00.556 Max Data Transfer Size: 524288 00:10:00.556 Max Number of Namespaces: 256 00:10:00.556 Max Number of I/O Queues: 64 00:10:00.556 NVMe Specification Version (VS): 1.4 00:10:00.556 NVMe Specification Version (Identify): 1.4 00:10:00.556 Maximum Queue Entries: 2048 00:10:00.556 Contiguous Queues Required: Yes 00:10:00.556 Arbitration Mechanisms Supported 00:10:00.556 Weighted Round Robin: Not Supported 00:10:00.556 Vendor Specific: Not Supported 00:10:00.556 Reset Timeout: 7500 ms 00:10:00.556 Doorbell Stride: 4 bytes 00:10:00.556 NVM Subsystem 
Reset: Not Supported 00:10:00.556 Command Sets Supported 00:10:00.556 NVM Command Set: Supported 00:10:00.556 Boot Partition: Not Supported 00:10:00.556 Memory Page Size Minimum: 4096 bytes 00:10:00.556 Memory Page Size Maximum: 65536 bytes 00:10:00.556 Persistent Memory Region: Not Supported 00:10:00.556 Optional Asynchronous Events Supported 00:10:00.556 Namespace Attribute Notices: Supported 00:10:00.556 Firmware Activation Notices: Not Supported 00:10:00.556 ANA Change Notices: Not Supported 00:10:00.556 PLE Aggregate Log Change Notices: Not Supported 00:10:00.556 LBA Status Info Alert Notices: Not Supported 00:10:00.556 EGE Aggregate Log Change Notices: Not Supported 00:10:00.556 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.556 Zone Descriptor Change Notices: Not Supported 00:10:00.556 Discovery Log Change Notices: Not Supported 00:10:00.556 Controller Attributes 00:10:00.556 128-bit Host Identifier: Not Supported 00:10:00.556 Non-Operational Permissive Mode: Not Supported 00:10:00.556 NVM Sets: Not Supported 00:10:00.556 Read Recovery Levels: Not Supported 00:10:00.556 Endurance Groups: Not Supported 00:10:00.556 Predictable Latency Mode: Not Supported 00:10:00.556 Traffic Based Keep ALive: Not Supported 00:10:00.556 Namespace Granularity: Not Supported 00:10:00.556 SQ Associations: Not Supported 00:10:00.556 UUID List: Not Supported 00:10:00.556 Multi-Domain Subsystem: Not Supported 00:10:00.556 Fixed Capacity Management: Not Supported 00:10:00.556 Variable Capacity Management: Not Supported 00:10:00.556 Delete Endurance Group: Not Supported 00:10:00.556 Delete NVM Set: Not Supported 00:10:00.556 Extended LBA Formats Supported: Supported 00:10:00.556 Flexible Data Placement Supported: Not Supported 00:10:00.556 00:10:00.556 Controller Memory Buffer Support 00:10:00.556 ================================ 00:10:00.556 Supported: No 00:10:00.556 00:10:00.556 Persistent Memory Region Support 00:10:00.556 ================================ 00:10:00.556 Supported: No 00:10:00.556 00:10:00.556 Admin Command Set Attributes 00:10:00.556 ============================ 00:10:00.556 Security Send/Receive: Not Supported 00:10:00.556 Format NVM: Supported 00:10:00.556 Firmware Activate/Download: Not Supported 00:10:00.556 Namespace Management: Supported 00:10:00.556 Device Self-Test: Not Supported 00:10:00.556 Directives: Supported 00:10:00.556 NVMe-MI: Not Supported 00:10:00.556 Virtualization Management: Not Supported 00:10:00.556 Doorbell Buffer Config: Supported 00:10:00.556 Get LBA Status Capability: Not Supported 00:10:00.556 Command & Feature Lockdown Capability: Not Supported 00:10:00.556 Abort Command Limit: 4 00:10:00.556 Async Event Request Limit: 4 00:10:00.556 Number of Firmware Slots: N/A 00:10:00.556 Firmware Slot 1 Read-Only: N/A 00:10:00.556 Firmware Activation Without Reset: N/A 00:10:00.556 Multiple Update Detection Support: N/A 00:10:00.556 Firmware Update Granularity: No Information Provided 00:10:00.556 Per-Namespace SMART Log: Yes 00:10:00.556 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.556 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:00.556 Command Effects Log Page: Supported 00:10:00.556 Get Log Page Extended Data: Supported 00:10:00.556 Telemetry Log Pages: Not Supported 00:10:00.556 Persistent Event Log Pages: Not Supported 00:10:00.556 Supported Log Pages Log Page: May Support 00:10:00.556 Commands Supported & Effects Log Page: Not Supported 00:10:00.556 Feature Identifiers & Effects Log Page:May Support 00:10:00.556 NVMe-MI Commands & 
Effects Log Page: May Support 00:10:00.556 Data Area 4 for Telemetry Log: Not Supported 00:10:00.556 Error Log Page Entries Supported: 1 00:10:00.556 Keep Alive: Not Supported 00:10:00.556 00:10:00.556 NVM Command Set Attributes 00:10:00.556 ========================== 00:10:00.556 Submission Queue Entry Size 00:10:00.556 Max: 64 00:10:00.556 Min: 64 00:10:00.556 Completion Queue Entry Size 00:10:00.556 Max: 16 00:10:00.556 Min: 16 00:10:00.556 Number of Namespaces: 256 00:10:00.556 Compare Command: Supported 00:10:00.556 Write Uncorrectable Command: Not Supported 00:10:00.556 Dataset Management Command: Supported 00:10:00.556 Write Zeroes Command: Supported 00:10:00.556 Set Features Save Field: Supported 00:10:00.556 Reservations: Not Supported 00:10:00.556 Timestamp: Supported 00:10:00.556 Copy: Supported 00:10:00.556 Volatile Write Cache: Present 00:10:00.556 Atomic Write Unit (Normal): 1 00:10:00.556 Atomic Write Unit (PFail): 1 00:10:00.556 Atomic Compare & Write Unit: 1 00:10:00.556 Fused Compare & Write: Not Supported 00:10:00.556 Scatter-Gather List 00:10:00.556 SGL Command Set: Supported 00:10:00.556 SGL Keyed: Not Supported 00:10:00.556 SGL Bit Bucket Descriptor: Not Supported 00:10:00.556 SGL Metadata Pointer: Not Supported 00:10:00.556 Oversized SGL: Not Supported 00:10:00.556 SGL Metadata Address: Not Supported 00:10:00.556 SGL Offset: Not Supported 00:10:00.556 Transport SGL Data Block: Not Supported 00:10:00.556 Replay Protected Memory Block: Not Supported 00:10:00.556 00:10:00.556 Firmware Slot Information 00:10:00.556 ========================= 00:10:00.556 Active slot: 1 00:10:00.556 Slot 1 Firmware Revision: 1.0 00:10:00.556 00:10:00.557 00:10:00.557 Commands Supported and Effects 00:10:00.557 ============================== 00:10:00.557 Admin Commands 00:10:00.557 -------------- 00:10:00.557 Delete I/O Submission Queue (00h): Supported 00:10:00.557 Create I/O Submission Queue (01h): Supported 00:10:00.557 Get Log Page (02h): Supported 00:10:00.557 Delete I/O Completion Queue (04h): Supported 00:10:00.557 Create I/O Completion Queue (05h): Supported 00:10:00.557 Identify (06h): Supported 00:10:00.557 Abort (08h): Supported 00:10:00.557 Set Features (09h): Supported 00:10:00.557 Get Features (0Ah): Supported 00:10:00.557 Asynchronous Event Request (0Ch): Supported 00:10:00.557 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.557 Directive Send (19h): Supported 00:10:00.557 Directive Receive (1Ah): Supported 00:10:00.557 Virtualization Management (1Ch): Supported 00:10:00.557 Doorbell Buffer Config (7Ch): Supported 00:10:00.557 Format NVM (80h): Supported LBA-Change 00:10:00.557 I/O Commands 00:10:00.557 ------------ 00:10:00.557 Flush (00h): Supported LBA-Change 00:10:00.557 Write (01h): Supported LBA-Change 00:10:00.557 Read (02h): Supported 00:10:00.557 Compare (05h): Supported 00:10:00.557 Write Zeroes (08h): Supported LBA-Change 00:10:00.557 Dataset Management (09h): Supported LBA-Change 00:10:00.557 Unknown (0Ch): Supported 00:10:00.557 Unknown (12h): Supported 00:10:00.557 Copy (19h): Supported LBA-Change 00:10:00.557 Unknown (1Dh): Supported LBA-Change 00:10:00.557 00:10:00.557 Error Log 00:10:00.557 ========= 00:10:00.557 00:10:00.557 Arbitration 00:10:00.557 =========== 00:10:00.557 Arbitration Burst: no limit 00:10:00.557 00:10:00.557 Power Management 00:10:00.557 ================ 00:10:00.557 Number of Power States: 1 00:10:00.557 Current Power State: Power State #0 00:10:00.557 Power State #0: 00:10:00.557 Max Power: 25.00 W 00:10:00.557 
Non-Operational State: Operational 00:10:00.557 Entry Latency: 16 microseconds 00:10:00.557 Exit Latency: 4 microseconds 00:10:00.557 Relative Read Throughput: 0 00:10:00.557 Relative Read Latency: 0 00:10:00.557 Relative Write Throughput: 0 00:10:00.557 Relative Write Latency: 0 00:10:00.557 Idle Power: Not Reported 00:10:00.557 Active Power: Not Reported 00:10:00.557 Non-Operational Permissive Mode: Not Supported 00:10:00.557 00:10:00.557 Health Information 00:10:00.557 ================== 00:10:00.557 Critical Warnings: 00:10:00.557 Available Spare Space: OK 00:10:00.557 Temperature: OK 00:10:00.557 Device Reliability: OK 00:10:00.557 Read Only: No 00:10:00.557 Volatile Memory Backup: OK 00:10:00.557 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.557 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.557 Available Spare: 0% 00:10:00.557 Available Spare Threshold: 0% 00:10:00.557 Life Percentage Used: 0% 00:10:00.557 Data Units Read: 2495 00:10:00.557 Data Units Written: 2175 00:10:00.557 Host Read Commands: 118475 00:10:00.557 Host Write Commands: 114245 00:10:00.557 Controller Busy Time: 0 minutes 00:10:00.557 Power Cycles: 0 00:10:00.557 Power On Hours: 0 hours 00:10:00.557 Unsafe Shutdowns: 0 00:10:00.557 Unrecoverable Media Errors: 0 00:10:00.557 Lifetime Error Log Entries: 0 00:10:00.557 Warning Temperature Time: 0 minutes 00:10:00.557 Critical Temperature Time: 0 minutes 00:10:00.557 00:10:00.557 Number of Queues 00:10:00.557 ================ 00:10:00.557 Number of I/O Submission Queues: 64 00:10:00.557 Number of I/O Completion Queues: 64 00:10:00.557 00:10:00.557 ZNS Specific Controller Data 00:10:00.557 ============================ 00:10:00.557 Zone Append Size Limit: 0 00:10:00.557 00:10:00.557 00:10:00.557 Active Namespaces 00:10:00.557 ================= 00:10:00.557 Namespace ID:1 00:10:00.557 Error Recovery Timeout: Unlimited 00:10:00.557 Command Set Identifier: NVM (00h) 00:10:00.557 Deallocate: Supported 00:10:00.557 Deallocated/Unwritten Error: Supported 00:10:00.557 Deallocated Read Value: All 0x00 00:10:00.557 Deallocate in Write Zeroes: Not Supported 00:10:00.557 Deallocated Guard Field: 0xFFFF 00:10:00.557 Flush: Supported 00:10:00.557 Reservation: Not Supported 00:10:00.557 Namespace Sharing Capabilities: Private 00:10:00.557 Size (in LBAs): 1048576 (4GiB) 00:10:00.557 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.557 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.557 Thin Provisioning: Not Supported 00:10:00.557 Per-NS Atomic Units: No 00:10:00.557 Maximum Single Source Range Length: 128 00:10:00.557 Maximum Copy Length: 128 00:10:00.557 Maximum Source Range Count: 128 00:10:00.557 NGUID/EUI64 Never Reused: No 00:10:00.557 Namespace Write Protected: No 00:10:00.557 Number of LBA Formats: 8 00:10:00.557 Current LBA Format: LBA Format #04 00:10:00.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.557 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.557 00:10:00.557 NVM Specific Namespace Data 00:10:00.557 =========================== 00:10:00.557 Logical Block Storage Tag Mask: 0 00:10:00.557 Protection Information Capabilities: 00:10:00.557 16b Guard Protection Information 
Storage Tag Support: No 00:10:00.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.557 Storage Tag Check Read Support: No 00:10:00.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.557 Namespace ID:2 00:10:00.557 Error Recovery Timeout: Unlimited 00:10:00.557 Command Set Identifier: NVM (00h) 00:10:00.557 Deallocate: Supported 00:10:00.557 Deallocated/Unwritten Error: Supported 00:10:00.557 Deallocated Read Value: All 0x00 00:10:00.557 Deallocate in Write Zeroes: Not Supported 00:10:00.557 Deallocated Guard Field: 0xFFFF 00:10:00.557 Flush: Supported 00:10:00.557 Reservation: Not Supported 00:10:00.557 Namespace Sharing Capabilities: Private 00:10:00.557 Size (in LBAs): 1048576 (4GiB) 00:10:00.557 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.557 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.557 Thin Provisioning: Not Supported 00:10:00.557 Per-NS Atomic Units: No 00:10:00.557 Maximum Single Source Range Length: 128 00:10:00.557 Maximum Copy Length: 128 00:10:00.557 Maximum Source Range Count: 128 00:10:00.557 NGUID/EUI64 Never Reused: No 00:10:00.557 Namespace Write Protected: No 00:10:00.557 Number of LBA Formats: 8 00:10:00.557 Current LBA Format: LBA Format #04 00:10:00.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.558 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.558 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.558 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.558 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.558 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.558 00:10:00.558 NVM Specific Namespace Data 00:10:00.558 =========================== 00:10:00.558 Logical Block Storage Tag Mask: 0 00:10:00.558 Protection Information Capabilities: 00:10:00.558 16b Guard Protection Information Storage Tag Support: No 00:10:00.558 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.558 Storage Tag Check Read Support: No 00:10:00.558 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Namespace ID:3 00:10:00.558 Error Recovery Timeout: Unlimited 00:10:00.558 Command Set Identifier: NVM (00h) 00:10:00.558 Deallocate: Supported 00:10:00.558 Deallocated/Unwritten Error: Supported 00:10:00.558 Deallocated Read Value: All 0x00 00:10:00.558 Deallocate in Write Zeroes: Not Supported 00:10:00.558 Deallocated Guard Field: 0xFFFF 00:10:00.558 Flush: Supported 00:10:00.558 Reservation: Not Supported 00:10:00.558 Namespace Sharing Capabilities: Private 00:10:00.558 Size (in LBAs): 1048576 (4GiB) [2024-07-26 12:03:48.275956] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68298 terminated unexpected 00:10:00.558 Capacity (in LBAs): 1048576 (4GiB) 00:10:00.558 Utilization (in LBAs): 1048576 (4GiB) 00:10:00.558 Thin Provisioning: Not Supported 00:10:00.558 Per-NS Atomic Units: No 00:10:00.558 Maximum Single Source Range Length: 128 00:10:00.558 Maximum Copy Length: 128 00:10:00.558 Maximum Source Range Count: 128 00:10:00.558 NGUID/EUI64 Never Reused: No 00:10:00.558 Namespace Write Protected: No 00:10:00.558 Number of LBA Formats: 8 00:10:00.558 Current LBA Format: LBA Format #04 00:10:00.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.558 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.558 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.558 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.558 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.558 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.558 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.558 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.558 00:10:00.558 NVM Specific Namespace Data 00:10:00.558 =========================== 00:10:00.558 Logical Block Storage Tag Mask: 0 00:10:00.558 Protection Information Capabilities: 00:10:00.558 16b Guard Protection Information Storage Tag Support: No 00:10:00.558 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.558 Storage Tag Check Read Support: No 00:10:00.558 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.558 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:00.558 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:00.817 ===================================================== 00:10:00.817 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.817 ===================================================== 00:10:00.817 Controller
Capabilities/Features 00:10:00.817 ================================ 00:10:00.817 Vendor ID: 1b36 00:10:00.817 Subsystem Vendor ID: 1af4 00:10:00.817 Serial Number: 12340 00:10:00.817 Model Number: QEMU NVMe Ctrl 00:10:00.817 Firmware Version: 8.0.0 00:10:00.817 Recommended Arb Burst: 6 00:10:00.817 IEEE OUI Identifier: 00 54 52 00:10:00.817 Multi-path I/O 00:10:00.817 May have multiple subsystem ports: No 00:10:00.817 May have multiple controllers: No 00:10:00.817 Associated with SR-IOV VF: No 00:10:00.817 Max Data Transfer Size: 524288 00:10:00.817 Max Number of Namespaces: 256 00:10:00.817 Max Number of I/O Queues: 64 00:10:00.817 NVMe Specification Version (VS): 1.4 00:10:00.817 NVMe Specification Version (Identify): 1.4 00:10:00.817 Maximum Queue Entries: 2048 00:10:00.817 Contiguous Queues Required: Yes 00:10:00.817 Arbitration Mechanisms Supported 00:10:00.817 Weighted Round Robin: Not Supported 00:10:00.817 Vendor Specific: Not Supported 00:10:00.817 Reset Timeout: 7500 ms 00:10:00.817 Doorbell Stride: 4 bytes 00:10:00.817 NVM Subsystem Reset: Not Supported 00:10:00.817 Command Sets Supported 00:10:00.817 NVM Command Set: Supported 00:10:00.817 Boot Partition: Not Supported 00:10:00.817 Memory Page Size Minimum: 4096 bytes 00:10:00.817 Memory Page Size Maximum: 65536 bytes 00:10:00.817 Persistent Memory Region: Not Supported 00:10:00.817 Optional Asynchronous Events Supported 00:10:00.817 Namespace Attribute Notices: Supported 00:10:00.817 Firmware Activation Notices: Not Supported 00:10:00.817 ANA Change Notices: Not Supported 00:10:00.817 PLE Aggregate Log Change Notices: Not Supported 00:10:00.817 LBA Status Info Alert Notices: Not Supported 00:10:00.817 EGE Aggregate Log Change Notices: Not Supported 00:10:00.817 Normal NVM Subsystem Shutdown event: Not Supported 00:10:00.817 Zone Descriptor Change Notices: Not Supported 00:10:00.817 Discovery Log Change Notices: Not Supported 00:10:00.817 Controller Attributes 00:10:00.817 128-bit Host Identifier: Not Supported 00:10:00.817 Non-Operational Permissive Mode: Not Supported 00:10:00.817 NVM Sets: Not Supported 00:10:00.817 Read Recovery Levels: Not Supported 00:10:00.817 Endurance Groups: Not Supported 00:10:00.817 Predictable Latency Mode: Not Supported 00:10:00.817 Traffic Based Keep ALive: Not Supported 00:10:00.817 Namespace Granularity: Not Supported 00:10:00.817 SQ Associations: Not Supported 00:10:00.817 UUID List: Not Supported 00:10:00.817 Multi-Domain Subsystem: Not Supported 00:10:00.817 Fixed Capacity Management: Not Supported 00:10:00.817 Variable Capacity Management: Not Supported 00:10:00.817 Delete Endurance Group: Not Supported 00:10:00.818 Delete NVM Set: Not Supported 00:10:00.818 Extended LBA Formats Supported: Supported 00:10:00.818 Flexible Data Placement Supported: Not Supported 00:10:00.818 00:10:00.818 Controller Memory Buffer Support 00:10:00.818 ================================ 00:10:00.818 Supported: No 00:10:00.818 00:10:00.818 Persistent Memory Region Support 00:10:00.818 ================================ 00:10:00.818 Supported: No 00:10:00.818 00:10:00.818 Admin Command Set Attributes 00:10:00.818 ============================ 00:10:00.818 Security Send/Receive: Not Supported 00:10:00.818 Format NVM: Supported 00:10:00.818 Firmware Activate/Download: Not Supported 00:10:00.818 Namespace Management: Supported 00:10:00.818 Device Self-Test: Not Supported 00:10:00.818 Directives: Supported 00:10:00.818 NVMe-MI: Not Supported 00:10:00.818 Virtualization Management: Not Supported 00:10:00.818 Doorbell Buffer 
Config: Supported 00:10:00.818 Get LBA Status Capability: Not Supported 00:10:00.818 Command & Feature Lockdown Capability: Not Supported 00:10:00.818 Abort Command Limit: 4 00:10:00.818 Async Event Request Limit: 4 00:10:00.818 Number of Firmware Slots: N/A 00:10:00.818 Firmware Slot 1 Read-Only: N/A 00:10:00.818 Firmware Activation Without Reset: N/A 00:10:00.818 Multiple Update Detection Support: N/A 00:10:00.818 Firmware Update Granularity: No Information Provided 00:10:00.818 Per-Namespace SMART Log: Yes 00:10:00.818 Asymmetric Namespace Access Log Page: Not Supported 00:10:00.818 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:00.818 Command Effects Log Page: Supported 00:10:00.818 Get Log Page Extended Data: Supported 00:10:00.818 Telemetry Log Pages: Not Supported 00:10:00.818 Persistent Event Log Pages: Not Supported 00:10:00.818 Supported Log Pages Log Page: May Support 00:10:00.818 Commands Supported & Effects Log Page: Not Supported 00:10:00.818 Feature Identifiers & Effects Log Page:May Support 00:10:00.818 NVMe-MI Commands & Effects Log Page: May Support 00:10:00.818 Data Area 4 for Telemetry Log: Not Supported 00:10:00.818 Error Log Page Entries Supported: 1 00:10:00.818 Keep Alive: Not Supported 00:10:00.818 00:10:00.818 NVM Command Set Attributes 00:10:00.818 ========================== 00:10:00.818 Submission Queue Entry Size 00:10:00.818 Max: 64 00:10:00.818 Min: 64 00:10:00.818 Completion Queue Entry Size 00:10:00.818 Max: 16 00:10:00.818 Min: 16 00:10:00.818 Number of Namespaces: 256 00:10:00.818 Compare Command: Supported 00:10:00.818 Write Uncorrectable Command: Not Supported 00:10:00.818 Dataset Management Command: Supported 00:10:00.818 Write Zeroes Command: Supported 00:10:00.818 Set Features Save Field: Supported 00:10:00.818 Reservations: Not Supported 00:10:00.818 Timestamp: Supported 00:10:00.818 Copy: Supported 00:10:00.818 Volatile Write Cache: Present 00:10:00.818 Atomic Write Unit (Normal): 1 00:10:00.818 Atomic Write Unit (PFail): 1 00:10:00.818 Atomic Compare & Write Unit: 1 00:10:00.818 Fused Compare & Write: Not Supported 00:10:00.818 Scatter-Gather List 00:10:00.818 SGL Command Set: Supported 00:10:00.818 SGL Keyed: Not Supported 00:10:00.818 SGL Bit Bucket Descriptor: Not Supported 00:10:00.818 SGL Metadata Pointer: Not Supported 00:10:00.818 Oversized SGL: Not Supported 00:10:00.818 SGL Metadata Address: Not Supported 00:10:00.818 SGL Offset: Not Supported 00:10:00.818 Transport SGL Data Block: Not Supported 00:10:00.818 Replay Protected Memory Block: Not Supported 00:10:00.818 00:10:00.818 Firmware Slot Information 00:10:00.818 ========================= 00:10:00.818 Active slot: 1 00:10:00.818 Slot 1 Firmware Revision: 1.0 00:10:00.818 00:10:00.818 00:10:00.818 Commands Supported and Effects 00:10:00.818 ============================== 00:10:00.818 Admin Commands 00:10:00.818 -------------- 00:10:00.818 Delete I/O Submission Queue (00h): Supported 00:10:00.818 Create I/O Submission Queue (01h): Supported 00:10:00.818 Get Log Page (02h): Supported 00:10:00.818 Delete I/O Completion Queue (04h): Supported 00:10:00.818 Create I/O Completion Queue (05h): Supported 00:10:00.818 Identify (06h): Supported 00:10:00.818 Abort (08h): Supported 00:10:00.818 Set Features (09h): Supported 00:10:00.818 Get Features (0Ah): Supported 00:10:00.818 Asynchronous Event Request (0Ch): Supported 00:10:00.818 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:00.818 Directive Send (19h): Supported 00:10:00.818 Directive Receive (1Ah): Supported 00:10:00.818 
Virtualization Management (1Ch): Supported 00:10:00.818 Doorbell Buffer Config (7Ch): Supported 00:10:00.818 Format NVM (80h): Supported LBA-Change 00:10:00.818 I/O Commands 00:10:00.818 ------------ 00:10:00.818 Flush (00h): Supported LBA-Change 00:10:00.818 Write (01h): Supported LBA-Change 00:10:00.818 Read (02h): Supported 00:10:00.818 Compare (05h): Supported 00:10:00.818 Write Zeroes (08h): Supported LBA-Change 00:10:00.818 Dataset Management (09h): Supported LBA-Change 00:10:00.818 Unknown (0Ch): Supported 00:10:00.818 Unknown (12h): Supported 00:10:00.818 Copy (19h): Supported LBA-Change 00:10:00.818 Unknown (1Dh): Supported LBA-Change 00:10:00.818 00:10:00.818 Error Log 00:10:00.818 ========= 00:10:00.818 00:10:00.818 Arbitration 00:10:00.818 =========== 00:10:00.818 Arbitration Burst: no limit 00:10:00.818 00:10:00.818 Power Management 00:10:00.818 ================ 00:10:00.818 Number of Power States: 1 00:10:00.818 Current Power State: Power State #0 00:10:00.818 Power State #0: 00:10:00.818 Max Power: 25.00 W 00:10:00.818 Non-Operational State: Operational 00:10:00.818 Entry Latency: 16 microseconds 00:10:00.818 Exit Latency: 4 microseconds 00:10:00.818 Relative Read Throughput: 0 00:10:00.818 Relative Read Latency: 0 00:10:00.818 Relative Write Throughput: 0 00:10:00.818 Relative Write Latency: 0 00:10:00.818 Idle Power: Not Reported 00:10:00.818 Active Power: Not Reported 00:10:00.818 Non-Operational Permissive Mode: Not Supported 00:10:00.818 00:10:00.818 Health Information 00:10:00.818 ================== 00:10:00.818 Critical Warnings: 00:10:00.818 Available Spare Space: OK 00:10:00.818 Temperature: OK 00:10:00.818 Device Reliability: OK 00:10:00.818 Read Only: No 00:10:00.818 Volatile Memory Backup: OK 00:10:00.818 Current Temperature: 323 Kelvin (50 Celsius) 00:10:00.818 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:00.818 Available Spare: 0% 00:10:00.818 Available Spare Threshold: 0% 00:10:00.819 Life Percentage Used: 0% 00:10:00.819 Data Units Read: 791 00:10:00.819 Data Units Written: 683 00:10:00.819 Host Read Commands: 38763 00:10:00.819 Host Write Commands: 37801 00:10:00.819 Controller Busy Time: 0 minutes 00:10:00.819 Power Cycles: 0 00:10:00.819 Power On Hours: 0 hours 00:10:00.819 Unsafe Shutdowns: 0 00:10:00.819 Unrecoverable Media Errors: 0 00:10:00.819 Lifetime Error Log Entries: 0 00:10:00.819 Warning Temperature Time: 0 minutes 00:10:00.819 Critical Temperature Time: 0 minutes 00:10:00.819 00:10:00.819 Number of Queues 00:10:00.819 ================ 00:10:00.819 Number of I/O Submission Queues: 64 00:10:00.819 Number of I/O Completion Queues: 64 00:10:00.819 00:10:00.819 ZNS Specific Controller Data 00:10:00.819 ============================ 00:10:00.819 Zone Append Size Limit: 0 00:10:00.819 00:10:00.819 00:10:00.819 Active Namespaces 00:10:00.819 ================= 00:10:00.819 Namespace ID:1 00:10:00.819 Error Recovery Timeout: Unlimited 00:10:00.819 Command Set Identifier: NVM (00h) 00:10:00.819 Deallocate: Supported 00:10:00.819 Deallocated/Unwritten Error: Supported 00:10:00.819 Deallocated Read Value: All 0x00 00:10:00.819 Deallocate in Write Zeroes: Not Supported 00:10:00.819 Deallocated Guard Field: 0xFFFF 00:10:00.819 Flush: Supported 00:10:00.819 Reservation: Not Supported 00:10:00.819 Metadata Transferred as: Separate Metadata Buffer 00:10:00.819 Namespace Sharing Capabilities: Private 00:10:00.819 Size (in LBAs): 1548666 (5GiB) 00:10:00.819 Capacity (in LBAs): 1548666 (5GiB) 00:10:00.819 Utilization (in LBAs): 1548666 (5GiB) 00:10:00.819 
Thin Provisioning: Not Supported 00:10:00.819 Per-NS Atomic Units: No 00:10:00.819 Maximum Single Source Range Length: 128 00:10:00.819 Maximum Copy Length: 128 00:10:00.819 Maximum Source Range Count: 128 00:10:00.819 NGUID/EUI64 Never Reused: No 00:10:00.819 Namespace Write Protected: No 00:10:00.819 Number of LBA Formats: 8 00:10:00.819 Current LBA Format: LBA Format #07 00:10:00.819 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:00.819 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:00.819 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:00.819 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:00.819 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:00.819 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:00.819 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:00.819 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:00.819 00:10:00.819 NVM Specific Namespace Data 00:10:00.819 =========================== 00:10:00.819 Logical Block Storage Tag Mask: 0 00:10:00.819 Protection Information Capabilities: 00:10:00.819 16b Guard Protection Information Storage Tag Support: No 00:10:00.819 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:00.819 Storage Tag Check Read Support: No 00:10:00.819 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:00.819 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:00.819 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:01.078 ===================================================== 00:10:01.078 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:01.078 ===================================================== 00:10:01.078 Controller Capabilities/Features 00:10:01.078 ================================ 00:10:01.078 Vendor ID: 1b36 00:10:01.078 Subsystem Vendor ID: 1af4 00:10:01.078 Serial Number: 12341 00:10:01.078 Model Number: QEMU NVMe Ctrl 00:10:01.078 Firmware Version: 8.0.0 00:10:01.078 Recommended Arb Burst: 6 00:10:01.078 IEEE OUI Identifier: 00 54 52 00:10:01.078 Multi-path I/O 00:10:01.078 May have multiple subsystem ports: No 00:10:01.078 May have multiple controllers: No 00:10:01.078 Associated with SR-IOV VF: No 00:10:01.078 Max Data Transfer Size: 524288 00:10:01.078 Max Number of Namespaces: 256 00:10:01.078 Max Number of I/O Queues: 64 00:10:01.078 NVMe Specification Version (VS): 1.4 00:10:01.078 NVMe Specification Version (Identify): 1.4 00:10:01.078 Maximum Queue Entries: 2048 00:10:01.078 Contiguous Queues Required: Yes 00:10:01.078 Arbitration Mechanisms Supported 00:10:01.078 Weighted Round Robin: Not Supported 00:10:01.078 Vendor Specific: Not Supported 00:10:01.078 Reset 
Timeout: 7500 ms 00:10:01.078 Doorbell Stride: 4 bytes 00:10:01.078 NVM Subsystem Reset: Not Supported 00:10:01.078 Command Sets Supported 00:10:01.078 NVM Command Set: Supported 00:10:01.078 Boot Partition: Not Supported 00:10:01.078 Memory Page Size Minimum: 4096 bytes 00:10:01.078 Memory Page Size Maximum: 65536 bytes 00:10:01.078 Persistent Memory Region: Not Supported 00:10:01.078 Optional Asynchronous Events Supported 00:10:01.078 Namespace Attribute Notices: Supported 00:10:01.078 Firmware Activation Notices: Not Supported 00:10:01.078 ANA Change Notices: Not Supported 00:10:01.078 PLE Aggregate Log Change Notices: Not Supported 00:10:01.078 LBA Status Info Alert Notices: Not Supported 00:10:01.078 EGE Aggregate Log Change Notices: Not Supported 00:10:01.078 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.078 Zone Descriptor Change Notices: Not Supported 00:10:01.078 Discovery Log Change Notices: Not Supported 00:10:01.078 Controller Attributes 00:10:01.078 128-bit Host Identifier: Not Supported 00:10:01.078 Non-Operational Permissive Mode: Not Supported 00:10:01.078 NVM Sets: Not Supported 00:10:01.079 Read Recovery Levels: Not Supported 00:10:01.079 Endurance Groups: Not Supported 00:10:01.079 Predictable Latency Mode: Not Supported 00:10:01.079 Traffic Based Keep ALive: Not Supported 00:10:01.079 Namespace Granularity: Not Supported 00:10:01.079 SQ Associations: Not Supported 00:10:01.079 UUID List: Not Supported 00:10:01.079 Multi-Domain Subsystem: Not Supported 00:10:01.079 Fixed Capacity Management: Not Supported 00:10:01.079 Variable Capacity Management: Not Supported 00:10:01.079 Delete Endurance Group: Not Supported 00:10:01.079 Delete NVM Set: Not Supported 00:10:01.079 Extended LBA Formats Supported: Supported 00:10:01.079 Flexible Data Placement Supported: Not Supported 00:10:01.079 00:10:01.079 Controller Memory Buffer Support 00:10:01.079 ================================ 00:10:01.079 Supported: No 00:10:01.079 00:10:01.079 Persistent Memory Region Support 00:10:01.079 ================================ 00:10:01.079 Supported: No 00:10:01.079 00:10:01.079 Admin Command Set Attributes 00:10:01.079 ============================ 00:10:01.079 Security Send/Receive: Not Supported 00:10:01.079 Format NVM: Supported 00:10:01.079 Firmware Activate/Download: Not Supported 00:10:01.079 Namespace Management: Supported 00:10:01.079 Device Self-Test: Not Supported 00:10:01.079 Directives: Supported 00:10:01.079 NVMe-MI: Not Supported 00:10:01.079 Virtualization Management: Not Supported 00:10:01.079 Doorbell Buffer Config: Supported 00:10:01.079 Get LBA Status Capability: Not Supported 00:10:01.079 Command & Feature Lockdown Capability: Not Supported 00:10:01.079 Abort Command Limit: 4 00:10:01.079 Async Event Request Limit: 4 00:10:01.079 Number of Firmware Slots: N/A 00:10:01.079 Firmware Slot 1 Read-Only: N/A 00:10:01.079 Firmware Activation Without Reset: N/A 00:10:01.079 Multiple Update Detection Support: N/A 00:10:01.079 Firmware Update Granularity: No Information Provided 00:10:01.079 Per-Namespace SMART Log: Yes 00:10:01.079 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.079 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:01.079 Command Effects Log Page: Supported 00:10:01.079 Get Log Page Extended Data: Supported 00:10:01.079 Telemetry Log Pages: Not Supported 00:10:01.079 Persistent Event Log Pages: Not Supported 00:10:01.079 Supported Log Pages Log Page: May Support 00:10:01.079 Commands Supported & Effects Log Page: Not Supported 00:10:01.079 
Feature Identifiers & Effects Log Page:May Support 00:10:01.079 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.079 Data Area 4 for Telemetry Log: Not Supported 00:10:01.079 Error Log Page Entries Supported: 1 00:10:01.079 Keep Alive: Not Supported 00:10:01.079 00:10:01.079 NVM Command Set Attributes 00:10:01.079 ========================== 00:10:01.079 Submission Queue Entry Size 00:10:01.079 Max: 64 00:10:01.079 Min: 64 00:10:01.079 Completion Queue Entry Size 00:10:01.079 Max: 16 00:10:01.079 Min: 16 00:10:01.079 Number of Namespaces: 256 00:10:01.079 Compare Command: Supported 00:10:01.079 Write Uncorrectable Command: Not Supported 00:10:01.079 Dataset Management Command: Supported 00:10:01.079 Write Zeroes Command: Supported 00:10:01.079 Set Features Save Field: Supported 00:10:01.079 Reservations: Not Supported 00:10:01.079 Timestamp: Supported 00:10:01.079 Copy: Supported 00:10:01.079 Volatile Write Cache: Present 00:10:01.079 Atomic Write Unit (Normal): 1 00:10:01.079 Atomic Write Unit (PFail): 1 00:10:01.079 Atomic Compare & Write Unit: 1 00:10:01.079 Fused Compare & Write: Not Supported 00:10:01.079 Scatter-Gather List 00:10:01.079 SGL Command Set: Supported 00:10:01.079 SGL Keyed: Not Supported 00:10:01.079 SGL Bit Bucket Descriptor: Not Supported 00:10:01.079 SGL Metadata Pointer: Not Supported 00:10:01.079 Oversized SGL: Not Supported 00:10:01.079 SGL Metadata Address: Not Supported 00:10:01.079 SGL Offset: Not Supported 00:10:01.079 Transport SGL Data Block: Not Supported 00:10:01.079 Replay Protected Memory Block: Not Supported 00:10:01.079 00:10:01.079 Firmware Slot Information 00:10:01.079 ========================= 00:10:01.079 Active slot: 1 00:10:01.079 Slot 1 Firmware Revision: 1.0 00:10:01.079 00:10:01.079 00:10:01.079 Commands Supported and Effects 00:10:01.079 ============================== 00:10:01.079 Admin Commands 00:10:01.079 -------------- 00:10:01.079 Delete I/O Submission Queue (00h): Supported 00:10:01.079 Create I/O Submission Queue (01h): Supported 00:10:01.079 Get Log Page (02h): Supported 00:10:01.079 Delete I/O Completion Queue (04h): Supported 00:10:01.079 Create I/O Completion Queue (05h): Supported 00:10:01.079 Identify (06h): Supported 00:10:01.079 Abort (08h): Supported 00:10:01.079 Set Features (09h): Supported 00:10:01.079 Get Features (0Ah): Supported 00:10:01.079 Asynchronous Event Request (0Ch): Supported 00:10:01.079 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.079 Directive Send (19h): Supported 00:10:01.079 Directive Receive (1Ah): Supported 00:10:01.079 Virtualization Management (1Ch): Supported 00:10:01.079 Doorbell Buffer Config (7Ch): Supported 00:10:01.079 Format NVM (80h): Supported LBA-Change 00:10:01.079 I/O Commands 00:10:01.079 ------------ 00:10:01.079 Flush (00h): Supported LBA-Change 00:10:01.079 Write (01h): Supported LBA-Change 00:10:01.079 Read (02h): Supported 00:10:01.079 Compare (05h): Supported 00:10:01.079 Write Zeroes (08h): Supported LBA-Change 00:10:01.079 Dataset Management (09h): Supported LBA-Change 00:10:01.079 Unknown (0Ch): Supported 00:10:01.079 Unknown (12h): Supported 00:10:01.079 Copy (19h): Supported LBA-Change 00:10:01.079 Unknown (1Dh): Supported LBA-Change 00:10:01.079 00:10:01.079 Error Log 00:10:01.079 ========= 00:10:01.079 00:10:01.079 Arbitration 00:10:01.079 =========== 00:10:01.079 Arbitration Burst: no limit 00:10:01.079 00:10:01.079 Power Management 00:10:01.079 ================ 00:10:01.079 Number of Power States: 1 00:10:01.079 Current Power State: Power State 
#0 00:10:01.079 Power State #0: 00:10:01.079 Max Power: 25.00 W 00:10:01.079 Non-Operational State: Operational 00:10:01.079 Entry Latency: 16 microseconds 00:10:01.079 Exit Latency: 4 microseconds 00:10:01.079 Relative Read Throughput: 0 00:10:01.079 Relative Read Latency: 0 00:10:01.079 Relative Write Throughput: 0 00:10:01.079 Relative Write Latency: 0 00:10:01.079 Idle Power: Not Reported 00:10:01.079 Active Power: Not Reported 00:10:01.079 Non-Operational Permissive Mode: Not Supported 00:10:01.079 00:10:01.079 Health Information 00:10:01.079 ================== 00:10:01.079 Critical Warnings: 00:10:01.079 Available Spare Space: OK 00:10:01.079 Temperature: OK 00:10:01.079 Device Reliability: OK 00:10:01.079 Read Only: No 00:10:01.079 Volatile Memory Backup: OK 00:10:01.079 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.079 Available Spare: 0% 00:10:01.079 Available Spare Threshold: 0% 00:10:01.079 Life Percentage Used: 0% 00:10:01.079 Data Units Read: 1211 00:10:01.079 Data Units Written: 995 00:10:01.079 Host Read Commands: 57715 00:10:01.079 Host Write Commands: 54761 00:10:01.079 Controller Busy Time: 0 minutes 00:10:01.079 Power Cycles: 0 00:10:01.079 Power On Hours: 0 hours 00:10:01.079 Unsafe Shutdowns: 0 00:10:01.079 Unrecoverable Media Errors: 0 00:10:01.079 Lifetime Error Log Entries: 0 00:10:01.080 Warning Temperature Time: 0 minutes 00:10:01.080 Critical Temperature Time: 0 minutes 00:10:01.080 00:10:01.080 Number of Queues 00:10:01.080 ================ 00:10:01.080 Number of I/O Submission Queues: 64 00:10:01.080 Number of I/O Completion Queues: 64 00:10:01.080 00:10:01.080 ZNS Specific Controller Data 00:10:01.080 ============================ 00:10:01.080 Zone Append Size Limit: 0 00:10:01.080 00:10:01.080 00:10:01.080 Active Namespaces 00:10:01.080 ================= 00:10:01.080 Namespace ID:1 00:10:01.080 Error Recovery Timeout: Unlimited 00:10:01.080 Command Set Identifier: NVM (00h) 00:10:01.080 Deallocate: Supported 00:10:01.080 Deallocated/Unwritten Error: Supported 00:10:01.080 Deallocated Read Value: All 0x00 00:10:01.080 Deallocate in Write Zeroes: Not Supported 00:10:01.080 Deallocated Guard Field: 0xFFFF 00:10:01.080 Flush: Supported 00:10:01.080 Reservation: Not Supported 00:10:01.080 Namespace Sharing Capabilities: Private 00:10:01.080 Size (in LBAs): 1310720 (5GiB) 00:10:01.080 Capacity (in LBAs): 1310720 (5GiB) 00:10:01.080 Utilization (in LBAs): 1310720 (5GiB) 00:10:01.080 Thin Provisioning: Not Supported 00:10:01.080 Per-NS Atomic Units: No 00:10:01.080 Maximum Single Source Range Length: 128 00:10:01.080 Maximum Copy Length: 128 00:10:01.080 Maximum Source Range Count: 128 00:10:01.080 NGUID/EUI64 Never Reused: No 00:10:01.080 Namespace Write Protected: No 00:10:01.080 Number of LBA Formats: 8 00:10:01.080 Current LBA Format: LBA Format #04 00:10:01.080 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.080 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.080 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.080 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.080 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.080 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.080 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.080 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.080 00:10:01.080 NVM Specific Namespace Data 00:10:01.080 =========================== 00:10:01.080 Logical Block Storage Tag Mask: 0 00:10:01.080 
Protection Information Capabilities: 00:10:01.080 16b Guard Protection Information Storage Tag Support: No 00:10:01.080 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.080 Storage Tag Check Read Support: No 00:10:01.080 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.080 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.080 12:03:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:01.339 ===================================================== 00:10:01.339 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:01.339 ===================================================== 00:10:01.339 Controller Capabilities/Features 00:10:01.339 ================================ 00:10:01.339 Vendor ID: 1b36 00:10:01.339 Subsystem Vendor ID: 1af4 00:10:01.339 Serial Number: 12342 00:10:01.339 Model Number: QEMU NVMe Ctrl 00:10:01.339 Firmware Version: 8.0.0 00:10:01.339 Recommended Arb Burst: 6 00:10:01.339 IEEE OUI Identifier: 00 54 52 00:10:01.339 Multi-path I/O 00:10:01.339 May have multiple subsystem ports: No 00:10:01.339 May have multiple controllers: No 00:10:01.339 Associated with SR-IOV VF: No 00:10:01.340 Max Data Transfer Size: 524288 00:10:01.340 Max Number of Namespaces: 256 00:10:01.340 Max Number of I/O Queues: 64 00:10:01.340 NVMe Specification Version (VS): 1.4 00:10:01.340 NVMe Specification Version (Identify): 1.4 00:10:01.340 Maximum Queue Entries: 2048 00:10:01.340 Contiguous Queues Required: Yes 00:10:01.340 Arbitration Mechanisms Supported 00:10:01.340 Weighted Round Robin: Not Supported 00:10:01.340 Vendor Specific: Not Supported 00:10:01.340 Reset Timeout: 7500 ms 00:10:01.340 Doorbell Stride: 4 bytes 00:10:01.340 NVM Subsystem Reset: Not Supported 00:10:01.340 Command Sets Supported 00:10:01.340 NVM Command Set: Supported 00:10:01.340 Boot Partition: Not Supported 00:10:01.340 Memory Page Size Minimum: 4096 bytes 00:10:01.340 Memory Page Size Maximum: 65536 bytes 00:10:01.340 Persistent Memory Region: Not Supported 00:10:01.340 Optional Asynchronous Events Supported 00:10:01.340 Namespace Attribute Notices: Supported 00:10:01.340 Firmware Activation Notices: Not Supported 00:10:01.340 ANA Change Notices: Not Supported 00:10:01.340 PLE Aggregate Log Change Notices: Not Supported 00:10:01.340 LBA Status Info Alert Notices: Not Supported 00:10:01.340 EGE Aggregate Log Change Notices: Not Supported 00:10:01.340 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.340 Zone Descriptor Change Notices: Not Supported 00:10:01.340 Discovery Log Change Notices: Not Supported 00:10:01.340 Controller Attributes 00:10:01.340 128-bit Host Identifier: Not Supported 
00:10:01.340 Non-Operational Permissive Mode: Not Supported 00:10:01.340 NVM Sets: Not Supported 00:10:01.340 Read Recovery Levels: Not Supported 00:10:01.340 Endurance Groups: Not Supported 00:10:01.340 Predictable Latency Mode: Not Supported 00:10:01.340 Traffic Based Keep ALive: Not Supported 00:10:01.340 Namespace Granularity: Not Supported 00:10:01.340 SQ Associations: Not Supported 00:10:01.340 UUID List: Not Supported 00:10:01.340 Multi-Domain Subsystem: Not Supported 00:10:01.340 Fixed Capacity Management: Not Supported 00:10:01.340 Variable Capacity Management: Not Supported 00:10:01.340 Delete Endurance Group: Not Supported 00:10:01.340 Delete NVM Set: Not Supported 00:10:01.340 Extended LBA Formats Supported: Supported 00:10:01.340 Flexible Data Placement Supported: Not Supported 00:10:01.340 00:10:01.340 Controller Memory Buffer Support 00:10:01.340 ================================ 00:10:01.340 Supported: No 00:10:01.340 00:10:01.340 Persistent Memory Region Support 00:10:01.340 ================================ 00:10:01.340 Supported: No 00:10:01.340 00:10:01.340 Admin Command Set Attributes 00:10:01.340 ============================ 00:10:01.340 Security Send/Receive: Not Supported 00:10:01.340 Format NVM: Supported 00:10:01.340 Firmware Activate/Download: Not Supported 00:10:01.340 Namespace Management: Supported 00:10:01.340 Device Self-Test: Not Supported 00:10:01.340 Directives: Supported 00:10:01.340 NVMe-MI: Not Supported 00:10:01.340 Virtualization Management: Not Supported 00:10:01.340 Doorbell Buffer Config: Supported 00:10:01.340 Get LBA Status Capability: Not Supported 00:10:01.340 Command & Feature Lockdown Capability: Not Supported 00:10:01.340 Abort Command Limit: 4 00:10:01.340 Async Event Request Limit: 4 00:10:01.340 Number of Firmware Slots: N/A 00:10:01.340 Firmware Slot 1 Read-Only: N/A 00:10:01.340 Firmware Activation Without Reset: N/A 00:10:01.340 Multiple Update Detection Support: N/A 00:10:01.340 Firmware Update Granularity: No Information Provided 00:10:01.340 Per-Namespace SMART Log: Yes 00:10:01.340 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.340 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:01.340 Command Effects Log Page: Supported 00:10:01.340 Get Log Page Extended Data: Supported 00:10:01.340 Telemetry Log Pages: Not Supported 00:10:01.340 Persistent Event Log Pages: Not Supported 00:10:01.340 Supported Log Pages Log Page: May Support 00:10:01.340 Commands Supported & Effects Log Page: Not Supported 00:10:01.340 Feature Identifiers & Effects Log Page:May Support 00:10:01.340 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.340 Data Area 4 for Telemetry Log: Not Supported 00:10:01.340 Error Log Page Entries Supported: 1 00:10:01.340 Keep Alive: Not Supported 00:10:01.340 00:10:01.340 NVM Command Set Attributes 00:10:01.340 ========================== 00:10:01.340 Submission Queue Entry Size 00:10:01.340 Max: 64 00:10:01.340 Min: 64 00:10:01.340 Completion Queue Entry Size 00:10:01.340 Max: 16 00:10:01.340 Min: 16 00:10:01.340 Number of Namespaces: 256 00:10:01.340 Compare Command: Supported 00:10:01.340 Write Uncorrectable Command: Not Supported 00:10:01.340 Dataset Management Command: Supported 00:10:01.340 Write Zeroes Command: Supported 00:10:01.340 Set Features Save Field: Supported 00:10:01.340 Reservations: Not Supported 00:10:01.340 Timestamp: Supported 00:10:01.340 Copy: Supported 00:10:01.340 Volatile Write Cache: Present 00:10:01.340 Atomic Write Unit (Normal): 1 00:10:01.340 Atomic Write Unit (PFail): 1 
00:10:01.340 Atomic Compare & Write Unit: 1 00:10:01.340 Fused Compare & Write: Not Supported 00:10:01.340 Scatter-Gather List 00:10:01.340 SGL Command Set: Supported 00:10:01.340 SGL Keyed: Not Supported 00:10:01.340 SGL Bit Bucket Descriptor: Not Supported 00:10:01.340 SGL Metadata Pointer: Not Supported 00:10:01.340 Oversized SGL: Not Supported 00:10:01.340 SGL Metadata Address: Not Supported 00:10:01.340 SGL Offset: Not Supported 00:10:01.340 Transport SGL Data Block: Not Supported 00:10:01.340 Replay Protected Memory Block: Not Supported 00:10:01.340 00:10:01.340 Firmware Slot Information 00:10:01.340 ========================= 00:10:01.340 Active slot: 1 00:10:01.340 Slot 1 Firmware Revision: 1.0 00:10:01.340 00:10:01.340 00:10:01.340 Commands Supported and Effects 00:10:01.340 ============================== 00:10:01.340 Admin Commands 00:10:01.340 -------------- 00:10:01.340 Delete I/O Submission Queue (00h): Supported 00:10:01.340 Create I/O Submission Queue (01h): Supported 00:10:01.340 Get Log Page (02h): Supported 00:10:01.340 Delete I/O Completion Queue (04h): Supported 00:10:01.340 Create I/O Completion Queue (05h): Supported 00:10:01.340 Identify (06h): Supported 00:10:01.340 Abort (08h): Supported 00:10:01.340 Set Features (09h): Supported 00:10:01.340 Get Features (0Ah): Supported 00:10:01.340 Asynchronous Event Request (0Ch): Supported 00:10:01.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.340 Directive Send (19h): Supported 00:10:01.340 Directive Receive (1Ah): Supported 00:10:01.340 Virtualization Management (1Ch): Supported 00:10:01.340 Doorbell Buffer Config (7Ch): Supported 00:10:01.340 Format NVM (80h): Supported LBA-Change 00:10:01.340 I/O Commands 00:10:01.340 ------------ 00:10:01.340 Flush (00h): Supported LBA-Change 00:10:01.340 Write (01h): Supported LBA-Change 00:10:01.340 Read (02h): Supported 00:10:01.340 Compare (05h): Supported 00:10:01.340 Write Zeroes (08h): Supported LBA-Change 00:10:01.340 Dataset Management (09h): Supported LBA-Change 00:10:01.340 Unknown (0Ch): Supported 00:10:01.340 Unknown (12h): Supported 00:10:01.340 Copy (19h): Supported LBA-Change 00:10:01.340 Unknown (1Dh): Supported LBA-Change 00:10:01.340 00:10:01.340 Error Log 00:10:01.340 ========= 00:10:01.340 00:10:01.340 Arbitration 00:10:01.340 =========== 00:10:01.340 Arbitration Burst: no limit 00:10:01.340 00:10:01.340 Power Management 00:10:01.340 ================ 00:10:01.341 Number of Power States: 1 00:10:01.341 Current Power State: Power State #0 00:10:01.341 Power State #0: 00:10:01.341 Max Power: 25.00 W 00:10:01.341 Non-Operational State: Operational 00:10:01.341 Entry Latency: 16 microseconds 00:10:01.341 Exit Latency: 4 microseconds 00:10:01.341 Relative Read Throughput: 0 00:10:01.341 Relative Read Latency: 0 00:10:01.341 Relative Write Throughput: 0 00:10:01.341 Relative Write Latency: 0 00:10:01.341 Idle Power: Not Reported 00:10:01.341 Active Power: Not Reported 00:10:01.341 Non-Operational Permissive Mode: Not Supported 00:10:01.341 00:10:01.341 Health Information 00:10:01.341 ================== 00:10:01.341 Critical Warnings: 00:10:01.341 Available Spare Space: OK 00:10:01.341 Temperature: OK 00:10:01.341 Device Reliability: OK 00:10:01.341 Read Only: No 00:10:01.341 Volatile Memory Backup: OK 00:10:01.341 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.341 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.341 Available Spare: 0% 00:10:01.341 Available Spare Threshold: 0% 00:10:01.341 Life Percentage Used: 0% 00:10:01.341 Data 
Units Read: 2495 00:10:01.341 Data Units Written: 2175 00:10:01.341 Host Read Commands: 118475 00:10:01.341 Host Write Commands: 114245 00:10:01.341 Controller Busy Time: 0 minutes 00:10:01.341 Power Cycles: 0 00:10:01.341 Power On Hours: 0 hours 00:10:01.341 Unsafe Shutdowns: 0 00:10:01.341 Unrecoverable Media Errors: 0 00:10:01.341 Lifetime Error Log Entries: 0 00:10:01.341 Warning Temperature Time: 0 minutes 00:10:01.341 Critical Temperature Time: 0 minutes 00:10:01.341 00:10:01.341 Number of Queues 00:10:01.341 ================ 00:10:01.341 Number of I/O Submission Queues: 64 00:10:01.341 Number of I/O Completion Queues: 64 00:10:01.341 00:10:01.341 ZNS Specific Controller Data 00:10:01.341 ============================ 00:10:01.341 Zone Append Size Limit: 0 00:10:01.341 00:10:01.341 00:10:01.341 Active Namespaces 00:10:01.341 ================= 00:10:01.341 Namespace ID:1 00:10:01.341 Error Recovery Timeout: Unlimited 00:10:01.341 Command Set Identifier: NVM (00h) 00:10:01.341 Deallocate: Supported 00:10:01.341 Deallocated/Unwritten Error: Supported 00:10:01.341 Deallocated Read Value: All 0x00 00:10:01.341 Deallocate in Write Zeroes: Not Supported 00:10:01.341 Deallocated Guard Field: 0xFFFF 00:10:01.341 Flush: Supported 00:10:01.341 Reservation: Not Supported 00:10:01.341 Namespace Sharing Capabilities: Private 00:10:01.341 Size (in LBAs): 1048576 (4GiB) 00:10:01.341 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.341 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.341 Thin Provisioning: Not Supported 00:10:01.341 Per-NS Atomic Units: No 00:10:01.341 Maximum Single Source Range Length: 128 00:10:01.341 Maximum Copy Length: 128 00:10:01.341 Maximum Source Range Count: 128 00:10:01.341 NGUID/EUI64 Never Reused: No 00:10:01.341 Namespace Write Protected: No 00:10:01.341 Number of LBA Formats: 8 00:10:01.341 Current LBA Format: LBA Format #04 00:10:01.341 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.341 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.341 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.341 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.341 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.341 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.341 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.341 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.341 00:10:01.341 NVM Specific Namespace Data 00:10:01.341 =========================== 00:10:01.341 Logical Block Storage Tag Mask: 0 00:10:01.341 Protection Information Capabilities: 00:10:01.341 16b Guard Protection Information Storage Tag Support: No 00:10:01.341 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.341 Storage Tag Check Read Support: No 00:10:01.341 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:10:01.341 Namespace ID:2 00:10:01.341 Error Recovery Timeout: Unlimited 00:10:01.341 Command Set Identifier: NVM (00h) 00:10:01.341 Deallocate: Supported 00:10:01.341 Deallocated/Unwritten Error: Supported 00:10:01.341 Deallocated Read Value: All 0x00 00:10:01.341 Deallocate in Write Zeroes: Not Supported 00:10:01.341 Deallocated Guard Field: 0xFFFF 00:10:01.341 Flush: Supported 00:10:01.341 Reservation: Not Supported 00:10:01.341 Namespace Sharing Capabilities: Private 00:10:01.341 Size (in LBAs): 1048576 (4GiB) 00:10:01.341 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.341 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.341 Thin Provisioning: Not Supported 00:10:01.341 Per-NS Atomic Units: No 00:10:01.341 Maximum Single Source Range Length: 128 00:10:01.341 Maximum Copy Length: 128 00:10:01.341 Maximum Source Range Count: 128 00:10:01.341 NGUID/EUI64 Never Reused: No 00:10:01.341 Namespace Write Protected: No 00:10:01.341 Number of LBA Formats: 8 00:10:01.341 Current LBA Format: LBA Format #04 00:10:01.341 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.341 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.341 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.341 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.341 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.341 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.341 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.341 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.341 00:10:01.341 NVM Specific Namespace Data 00:10:01.341 =========================== 00:10:01.341 Logical Block Storage Tag Mask: 0 00:10:01.341 Protection Information Capabilities: 00:10:01.341 16b Guard Protection Information Storage Tag Support: No 00:10:01.341 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.341 Storage Tag Check Read Support: No 00:10:01.341 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.341 Namespace ID:3 00:10:01.341 Error Recovery Timeout: Unlimited 00:10:01.341 Command Set Identifier: NVM (00h) 00:10:01.341 Deallocate: Supported 00:10:01.341 Deallocated/Unwritten Error: Supported 00:10:01.341 Deallocated Read Value: All 0x00 00:10:01.341 Deallocate in Write Zeroes: Not Supported 00:10:01.341 Deallocated Guard Field: 0xFFFF 00:10:01.341 Flush: Supported 00:10:01.341 Reservation: Not Supported 00:10:01.341 Namespace Sharing Capabilities: Private 00:10:01.341 Size (in LBAs): 1048576 (4GiB) 00:10:01.341 Capacity (in LBAs): 1048576 (4GiB) 00:10:01.341 Utilization (in LBAs): 1048576 (4GiB) 00:10:01.342 Thin Provisioning: Not Supported 00:10:01.342 Per-NS Atomic Units: No 00:10:01.342 Maximum Single Source Range Length: 128 00:10:01.342 
Maximum Copy Length: 128 00:10:01.342 Maximum Source Range Count: 128 00:10:01.342 NGUID/EUI64 Never Reused: No 00:10:01.342 Namespace Write Protected: No 00:10:01.342 Number of LBA Formats: 8 00:10:01.342 Current LBA Format: LBA Format #04 00:10:01.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.342 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.342 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.342 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.342 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.342 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.342 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.342 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.342 00:10:01.342 NVM Specific Namespace Data 00:10:01.342 =========================== 00:10:01.342 Logical Block Storage Tag Mask: 0 00:10:01.342 Protection Information Capabilities: 00:10:01.342 16b Guard Protection Information Storage Tag Support: No 00:10:01.342 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.342 Storage Tag Check Read Support: No 00:10:01.342 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.342 12:03:49 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:01.342 12:03:49 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:01.602 ===================================================== 00:10:01.602 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:01.602 ===================================================== 00:10:01.602 Controller Capabilities/Features 00:10:01.602 ================================ 00:10:01.602 Vendor ID: 1b36 00:10:01.602 Subsystem Vendor ID: 1af4 00:10:01.602 Serial Number: 12343 00:10:01.602 Model Number: QEMU NVMe Ctrl 00:10:01.602 Firmware Version: 8.0.0 00:10:01.602 Recommended Arb Burst: 6 00:10:01.602 IEEE OUI Identifier: 00 54 52 00:10:01.602 Multi-path I/O 00:10:01.602 May have multiple subsystem ports: No 00:10:01.602 May have multiple controllers: Yes 00:10:01.602 Associated with SR-IOV VF: No 00:10:01.602 Max Data Transfer Size: 524288 00:10:01.602 Max Number of Namespaces: 256 00:10:01.602 Max Number of I/O Queues: 64 00:10:01.602 NVMe Specification Version (VS): 1.4 00:10:01.602 NVMe Specification Version (Identify): 1.4 00:10:01.602 Maximum Queue Entries: 2048 00:10:01.602 Contiguous Queues Required: Yes 00:10:01.602 Arbitration Mechanisms Supported 00:10:01.602 Weighted Round Robin: Not Supported 00:10:01.602 Vendor Specific: Not Supported 00:10:01.602 Reset Timeout: 7500 ms 00:10:01.602 Doorbell Stride: 4 bytes 00:10:01.602 NVM Subsystem Reset: Not Supported 00:10:01.602 Command Sets 
Supported 00:10:01.602 NVM Command Set: Supported 00:10:01.602 Boot Partition: Not Supported 00:10:01.602 Memory Page Size Minimum: 4096 bytes 00:10:01.602 Memory Page Size Maximum: 65536 bytes 00:10:01.602 Persistent Memory Region: Not Supported 00:10:01.602 Optional Asynchronous Events Supported 00:10:01.602 Namespace Attribute Notices: Supported 00:10:01.602 Firmware Activation Notices: Not Supported 00:10:01.602 ANA Change Notices: Not Supported 00:10:01.602 PLE Aggregate Log Change Notices: Not Supported 00:10:01.602 LBA Status Info Alert Notices: Not Supported 00:10:01.602 EGE Aggregate Log Change Notices: Not Supported 00:10:01.602 Normal NVM Subsystem Shutdown event: Not Supported 00:10:01.602 Zone Descriptor Change Notices: Not Supported 00:10:01.602 Discovery Log Change Notices: Not Supported 00:10:01.602 Controller Attributes 00:10:01.602 128-bit Host Identifier: Not Supported 00:10:01.602 Non-Operational Permissive Mode: Not Supported 00:10:01.602 NVM Sets: Not Supported 00:10:01.602 Read Recovery Levels: Not Supported 00:10:01.602 Endurance Groups: Supported 00:10:01.602 Predictable Latency Mode: Not Supported 00:10:01.602 Traffic Based Keep Alive: Not Supported 00:10:01.602 Namespace Granularity: Not Supported 00:10:01.602 SQ Associations: Not Supported 00:10:01.602 UUID List: Not Supported 00:10:01.602 Multi-Domain Subsystem: Not Supported 00:10:01.602 Fixed Capacity Management: Not Supported 00:10:01.602 Variable Capacity Management: Not Supported 00:10:01.602 Delete Endurance Group: Not Supported 00:10:01.602 Delete NVM Set: Not Supported 00:10:01.602 Extended LBA Formats Supported: Supported 00:10:01.602 Flexible Data Placement Supported: Supported 00:10:01.602 00:10:01.602 Controller Memory Buffer Support 00:10:01.602 ================================ 00:10:01.602 Supported: No 00:10:01.602 00:10:01.602 Persistent Memory Region Support 00:10:01.602 ================================ 00:10:01.602 Supported: No 00:10:01.602 00:10:01.602 Admin Command Set Attributes 00:10:01.602 ============================ 00:10:01.603 Security Send/Receive: Not Supported 00:10:01.603 Format NVM: Supported 00:10:01.603 Firmware Activate/Download: Not Supported 00:10:01.603 Namespace Management: Supported 00:10:01.603 Device Self-Test: Not Supported 00:10:01.603 Directives: Supported 00:10:01.603 NVMe-MI: Not Supported 00:10:01.603 Virtualization Management: Not Supported 00:10:01.603 Doorbell Buffer Config: Supported 00:10:01.603 Get LBA Status Capability: Not Supported 00:10:01.603 Command & Feature Lockdown Capability: Not Supported 00:10:01.603 Abort Command Limit: 4 00:10:01.603 Async Event Request Limit: 4 00:10:01.603 Number of Firmware Slots: N/A 00:10:01.603 Firmware Slot 1 Read-Only: N/A 00:10:01.603 Firmware Activation Without Reset: N/A 00:10:01.603 Multiple Update Detection Support: N/A 00:10:01.603 Firmware Update Granularity: No Information Provided 00:10:01.603 Per-Namespace SMART Log: Yes 00:10:01.603 Asymmetric Namespace Access Log Page: Not Supported 00:10:01.603 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:01.603 Command Effects Log Page: Supported 00:10:01.603 Get Log Page Extended Data: Supported 00:10:01.603 Telemetry Log Pages: Not Supported 00:10:01.603 Persistent Event Log Pages: Not Supported 00:10:01.603 Supported Log Pages Log Page: May Support 00:10:01.603 Commands Supported & Effects Log Page: Not Supported 00:10:01.603 Feature Identifiers & Effects Log Page: May Support 00:10:01.603 NVMe-MI Commands & Effects Log Page: May Support 00:10:01.603 Data Area
4 for Telemetry Log: Not Supported 00:10:01.603 Error Log Page Entries Supported: 1 00:10:01.603 Keep Alive: Not Supported 00:10:01.603 00:10:01.603 NVM Command Set Attributes 00:10:01.603 ========================== 00:10:01.603 Submission Queue Entry Size 00:10:01.603 Max: 64 00:10:01.603 Min: 64 00:10:01.603 Completion Queue Entry Size 00:10:01.603 Max: 16 00:10:01.603 Min: 16 00:10:01.603 Number of Namespaces: 256 00:10:01.603 Compare Command: Supported 00:10:01.603 Write Uncorrectable Command: Not Supported 00:10:01.603 Dataset Management Command: Supported 00:10:01.603 Write Zeroes Command: Supported 00:10:01.603 Set Features Save Field: Supported 00:10:01.603 Reservations: Not Supported 00:10:01.603 Timestamp: Supported 00:10:01.603 Copy: Supported 00:10:01.603 Volatile Write Cache: Present 00:10:01.603 Atomic Write Unit (Normal): 1 00:10:01.603 Atomic Write Unit (PFail): 1 00:10:01.603 Atomic Compare & Write Unit: 1 00:10:01.603 Fused Compare & Write: Not Supported 00:10:01.603 Scatter-Gather List 00:10:01.603 SGL Command Set: Supported 00:10:01.603 SGL Keyed: Not Supported 00:10:01.603 SGL Bit Bucket Descriptor: Not Supported 00:10:01.603 SGL Metadata Pointer: Not Supported 00:10:01.603 Oversized SGL: Not Supported 00:10:01.603 SGL Metadata Address: Not Supported 00:10:01.603 SGL Offset: Not Supported 00:10:01.603 Transport SGL Data Block: Not Supported 00:10:01.603 Replay Protected Memory Block: Not Supported 00:10:01.603 00:10:01.603 Firmware Slot Information 00:10:01.603 ========================= 00:10:01.603 Active slot: 1 00:10:01.603 Slot 1 Firmware Revision: 1.0 00:10:01.603 00:10:01.603 00:10:01.603 Commands Supported and Effects 00:10:01.603 ============================== 00:10:01.603 Admin Commands 00:10:01.603 -------------- 00:10:01.603 Delete I/O Submission Queue (00h): Supported 00:10:01.603 Create I/O Submission Queue (01h): Supported 00:10:01.603 Get Log Page (02h): Supported 00:10:01.603 Delete I/O Completion Queue (04h): Supported 00:10:01.603 Create I/O Completion Queue (05h): Supported 00:10:01.603 Identify (06h): Supported 00:10:01.603 Abort (08h): Supported 00:10:01.603 Set Features (09h): Supported 00:10:01.603 Get Features (0Ah): Supported 00:10:01.603 Asynchronous Event Request (0Ch): Supported 00:10:01.603 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:01.603 Directive Send (19h): Supported 00:10:01.603 Directive Receive (1Ah): Supported 00:10:01.603 Virtualization Management (1Ch): Supported 00:10:01.603 Doorbell Buffer Config (7Ch): Supported 00:10:01.603 Format NVM (80h): Supported LBA-Change 00:10:01.603 I/O Commands 00:10:01.603 ------------ 00:10:01.603 Flush (00h): Supported LBA-Change 00:10:01.603 Write (01h): Supported LBA-Change 00:10:01.603 Read (02h): Supported 00:10:01.603 Compare (05h): Supported 00:10:01.603 Write Zeroes (08h): Supported LBA-Change 00:10:01.603 Dataset Management (09h): Supported LBA-Change 00:10:01.603 Unknown (0Ch): Supported 00:10:01.603 Unknown (12h): Supported 00:10:01.603 Copy (19h): Supported LBA-Change 00:10:01.603 Unknown (1Dh): Supported LBA-Change 00:10:01.603 00:10:01.603 Error Log 00:10:01.603 ========= 00:10:01.603 00:10:01.603 Arbitration 00:10:01.603 =========== 00:10:01.603 Arbitration Burst: no limit 00:10:01.603 00:10:01.603 Power Management 00:10:01.603 ================ 00:10:01.603 Number of Power States: 1 00:10:01.603 Current Power State: Power State #0 00:10:01.603 Power State #0: 00:10:01.603 Max Power: 25.00 W 00:10:01.603 Non-Operational State: Operational 00:10:01.603 Entry 
Latency: 16 microseconds 00:10:01.603 Exit Latency: 4 microseconds 00:10:01.603 Relative Read Throughput: 0 00:10:01.603 Relative Read Latency: 0 00:10:01.603 Relative Write Throughput: 0 00:10:01.603 Relative Write Latency: 0 00:10:01.603 Idle Power: Not Reported 00:10:01.603 Active Power: Not Reported 00:10:01.603 Non-Operational Permissive Mode: Not Supported 00:10:01.603 00:10:01.603 Health Information 00:10:01.603 ================== 00:10:01.603 Critical Warnings: 00:10:01.603 Available Spare Space: OK 00:10:01.603 Temperature: OK 00:10:01.603 Device Reliability: OK 00:10:01.603 Read Only: No 00:10:01.603 Volatile Memory Backup: OK 00:10:01.603 Current Temperature: 323 Kelvin (50 Celsius) 00:10:01.603 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:01.603 Available Spare: 0% 00:10:01.603 Available Spare Threshold: 0% 00:10:01.603 Life Percentage Used: 0% 00:10:01.603 Data Units Read: 913 00:10:01.603 Data Units Written: 806 00:10:01.603 Host Read Commands: 40225 00:10:01.603 Host Write Commands: 38815 00:10:01.603 Controller Busy Time: 0 minutes 00:10:01.603 Power Cycles: 0 00:10:01.603 Power On Hours: 0 hours 00:10:01.603 Unsafe Shutdowns: 0 00:10:01.603 Unrecoverable Media Errors: 0 00:10:01.603 Lifetime Error Log Entries: 0 00:10:01.603 Warning Temperature Time: 0 minutes 00:10:01.603 Critical Temperature Time: 0 minutes 00:10:01.603 00:10:01.603 Number of Queues 00:10:01.603 ================ 00:10:01.603 Number of I/O Submission Queues: 64 00:10:01.603 Number of I/O Completion Queues: 64 00:10:01.603 00:10:01.603 ZNS Specific Controller Data 00:10:01.603 ============================ 00:10:01.603 Zone Append Size Limit: 0 00:10:01.603 00:10:01.603 00:10:01.603 Active Namespaces 00:10:01.603 ================= 00:10:01.603 Namespace ID:1 00:10:01.603 Error Recovery Timeout: Unlimited 00:10:01.603 Command Set Identifier: NVM (00h) 00:10:01.603 Deallocate: Supported 00:10:01.603 Deallocated/Unwritten Error: Supported 00:10:01.603 Deallocated Read Value: All 0x00 00:10:01.603 Deallocate in Write Zeroes: Not Supported 00:10:01.603 Deallocated Guard Field: 0xFFFF 00:10:01.604 Flush: Supported 00:10:01.604 Reservation: Not Supported 00:10:01.604 Namespace Sharing Capabilities: Multiple Controllers 00:10:01.604 Size (in LBAs): 262144 (1GiB) 00:10:01.604 Capacity (in LBAs): 262144 (1GiB) 00:10:01.604 Utilization (in LBAs): 262144 (1GiB) 00:10:01.604 Thin Provisioning: Not Supported 00:10:01.604 Per-NS Atomic Units: No 00:10:01.604 Maximum Single Source Range Length: 128 00:10:01.604 Maximum Copy Length: 128 00:10:01.604 Maximum Source Range Count: 128 00:10:01.604 NGUID/EUI64 Never Reused: No 00:10:01.604 Namespace Write Protected: No 00:10:01.604 Endurance group ID: 1 00:10:01.604 Number of LBA Formats: 8 00:10:01.604 Current LBA Format: LBA Format #04 00:10:01.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:01.604 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:01.604 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:01.604 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:01.604 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:01.604 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:01.604 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:01.604 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:01.604 00:10:01.604 Get Feature FDP: 00:10:01.604 ================ 00:10:01.604 Enabled: Yes 00:10:01.604 FDP configuration index: 0 00:10:01.604 00:10:01.604 FDP configurations log page 00:10:01.604 =========================== 00:10:01.604 
Number of FDP configurations: 1 00:10:01.604 Version: 0 00:10:01.604 Size: 112 00:10:01.604 FDP Configuration Descriptor: 0 00:10:01.604 Descriptor Size: 96 00:10:01.604 Reclaim Group Identifier format: 2 00:10:01.604 FDP Volatile Write Cache: Not Present 00:10:01.604 FDP Configuration: Valid 00:10:01.604 Vendor Specific Size: 0 00:10:01.604 Number of Reclaim Groups: 2 00:10:01.604 Number of Reclaim Unit Handles: 8 00:10:01.604 Max Placement Identifiers: 128 00:10:01.604 Number of Namespaces Supported: 256 00:10:01.604 Reclaim unit Nominal Size: 6000000 bytes 00:10:01.604 Estimated Reclaim Unit Time Limit: Not Reported 00:10:01.604 RUH Desc #000: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #001: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #002: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #003: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #004: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #005: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #006: RUH Type: Initially Isolated 00:10:01.604 RUH Desc #007: RUH Type: Initially Isolated 00:10:01.604 00:10:01.604 FDP reclaim unit handle usage log page 00:10:01.863 ====================================== 00:10:01.863 Number of Reclaim Unit Handles: 8 00:10:01.863 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:01.863 RUH Usage Desc #001: RUH Attributes: Unused 00:10:01.863 RUH Usage Desc #002: RUH Attributes: Unused 00:10:01.863 RUH Usage Desc #003: RUH Attributes: Unused 00:10:01.864 RUH Usage Desc #004: RUH Attributes: Unused 00:10:01.864 RUH Usage Desc #005: RUH Attributes: Unused 00:10:01.864 RUH Usage Desc #006: RUH Attributes: Unused 00:10:01.864 RUH Usage Desc #007: RUH Attributes: Unused 00:10:01.864 00:10:01.864 FDP statistics log page 00:10:01.864 ======================= 00:10:01.864 Host bytes with metadata written: 500998144 00:10:01.864 Media bytes with metadata written: 501051392 00:10:01.864 Media bytes erased: 0 00:10:01.864 00:10:01.864 FDP events log page 00:10:01.864 =================== 00:10:01.864 Number of FDP events: 0 00:10:01.864 00:10:01.864 NVM Specific Namespace Data 00:10:01.864 =========================== 00:10:01.864 Logical Block Storage Tag Mask: 0 00:10:01.864 Protection Information Capabilities: 00:10:01.864 16b Guard Protection Information Storage Tag Support: No 00:10:01.864 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:01.864 Storage Tag Check Read Support: No 00:10:01.864 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:01.864 00:10:01.864 real 0m1.644s 00:10:01.864 user 0m0.605s 00:10:01.864 sys 0m0.809s 00:10:01.864 ************************************ 00:10:01.864 END TEST nvme_identify 00:10:01.864 ************************************ 00:10:01.864 12:03:49
nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.864 12:03:49 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:01.864 12:03:49 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:01.864 12:03:49 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:01.864 12:03:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.864 12:03:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.864 ************************************ 00:10:01.864 START TEST nvme_perf 00:10:01.864 ************************************ 00:10:01.864 12:03:49 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:10:01.864 12:03:49 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:03.257 Initializing NVMe Controllers 00:10:03.257 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:03.257 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:03.257 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:03.257 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:03.257 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:03.257 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:03.257 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:03.257 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:03.257 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:03.257 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:03.257 Initialization complete. Launching workers. 00:10:03.257 ======================================================== 00:10:03.257 Latency(us) 00:10:03.257 Device Information : IOPS MiB/s Average min max 00:10:03.257 PCIE (0000:00:10.0) NSID 1 from core 0: 13630.16 159.73 9411.37 7943.95 41550.61 00:10:03.257 PCIE (0000:00:11.0) NSID 1 from core 0: 13630.16 159.73 9394.05 8047.24 39419.11 00:10:03.257 PCIE (0000:00:13.0) NSID 1 from core 0: 13630.16 159.73 9374.43 8057.11 37755.32 00:10:03.257 PCIE (0000:00:12.0) NSID 1 from core 0: 13630.16 159.73 9355.30 8058.98 35554.93 00:10:03.257 PCIE (0000:00:12.0) NSID 2 from core 0: 13630.16 159.73 9335.81 8040.97 33351.97 00:10:03.257 PCIE (0000:00:12.0) NSID 3 from core 0: 13694.15 160.48 9273.32 8048.16 25643.13 00:10:03.257 ======================================================== 00:10:03.257 Total : 81844.95 959.12 9357.32 7943.95 41550.61 00:10:03.257 00:10:03.257 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:03.257 ================================================================================= 00:10:03.257 1.00000% : 8159.100us 00:10:03.257 10.00000% : 8422.297us 00:10:03.257 25.00000% : 8632.855us 00:10:03.257 50.00000% : 8948.691us 00:10:03.257 75.00000% : 9264.527us 00:10:03.257 90.00000% : 9948.839us 00:10:03.257 95.00000% : 11264.822us 00:10:03.257 98.00000% : 14949.578us 00:10:03.257 99.00000% : 19055.447us 00:10:03.257 99.50000% : 34531.418us 00:10:03.257 99.90000% : 41269.256us 00:10:03.257 99.99000% : 41690.371us 00:10:03.257 99.99900% : 41690.371us 00:10:03.257 99.99990% : 41690.371us 00:10:03.257 99.99999% : 41690.371us 00:10:03.257 00:10:03.257 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:03.257 ================================================================================= 00:10:03.257 1.00000% : 8264.379us 00:10:03.257 10.00000% : 8474.937us 00:10:03.257 25.00000% : 8685.494us 00:10:03.257 50.00000% : 8948.691us 00:10:03.257 75.00000% : 
9211.888us 00:10:03.257 90.00000% : 9896.199us 00:10:03.257 95.00000% : 11264.822us 00:10:03.257 98.00000% : 14423.184us 00:10:03.257 99.00000% : 18950.169us 00:10:03.257 99.50000% : 32425.844us 00:10:03.257 99.90000% : 39163.682us 00:10:03.257 99.99000% : 39584.797us 00:10:03.257 99.99900% : 39584.797us 00:10:03.257 99.99990% : 39584.797us 00:10:03.257 99.99999% : 39584.797us 00:10:03.257 00:10:03.257 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:03.257 ================================================================================= 00:10:03.257 1.00000% : 8264.379us 00:10:03.257 10.00000% : 8474.937us 00:10:03.257 25.00000% : 8685.494us 00:10:03.257 50.00000% : 8948.691us 00:10:03.257 75.00000% : 9211.888us 00:10:03.257 90.00000% : 9896.199us 00:10:03.257 95.00000% : 11212.183us 00:10:03.257 98.00000% : 15160.135us 00:10:03.257 99.00000% : 19266.005us 00:10:03.257 99.50000% : 30741.385us 00:10:03.257 99.90000% : 37479.222us 00:10:03.257 99.99000% : 37900.337us 00:10:03.257 99.99900% : 37900.337us 00:10:03.257 99.99990% : 37900.337us 00:10:03.257 99.99999% : 37900.337us 00:10:03.257 00:10:03.257 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:03.257 ================================================================================= 00:10:03.257 1.00000% : 8211.740us 00:10:03.257 10.00000% : 8474.937us 00:10:03.258 25.00000% : 8685.494us 00:10:03.258 50.00000% : 8948.691us 00:10:03.258 75.00000% : 9211.888us 00:10:03.258 90.00000% : 9896.199us 00:10:03.258 95.00000% : 11212.183us 00:10:03.258 98.00000% : 15791.807us 00:10:03.258 99.00000% : 19581.841us 00:10:03.258 99.50000% : 28635.810us 00:10:03.258 99.90000% : 35373.648us 00:10:03.258 99.99000% : 35584.206us 00:10:03.258 99.99900% : 35584.206us 00:10:03.258 99.99990% : 35584.206us 00:10:03.258 99.99999% : 35584.206us 00:10:03.258 00:10:03.258 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:03.258 ================================================================================= 00:10:03.258 1.00000% : 8211.740us 00:10:03.258 10.00000% : 8474.937us 00:10:03.258 25.00000% : 8685.494us 00:10:03.258 50.00000% : 8948.691us 00:10:03.258 75.00000% : 9211.888us 00:10:03.258 90.00000% : 9896.199us 00:10:03.258 95.00000% : 11264.822us 00:10:03.258 98.00000% : 16002.365us 00:10:03.258 99.00000% : 19476.562us 00:10:03.258 99.50000% : 26214.400us 00:10:03.258 99.90000% : 33057.516us 00:10:03.258 99.99000% : 33478.631us 00:10:03.258 99.99900% : 33478.631us 00:10:03.258 99.99990% : 33478.631us 00:10:03.258 99.99999% : 33478.631us 00:10:03.258 00:10:03.258 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:03.258 ================================================================================= 00:10:03.258 1.00000% : 8264.379us 00:10:03.258 10.00000% : 8474.937us 00:10:03.258 25.00000% : 8685.494us 00:10:03.258 50.00000% : 8948.691us 00:10:03.258 75.00000% : 9211.888us 00:10:03.258 90.00000% : 9948.839us 00:10:03.258 95.00000% : 11317.462us 00:10:03.258 98.00000% : 15475.971us 00:10:03.258 99.00000% : 18318.496us 00:10:03.258 99.50000% : 19266.005us 00:10:03.258 99.90000% : 25266.892us 00:10:03.258 99.99000% : 25688.006us 00:10:03.258 99.99900% : 25688.006us 00:10:03.258 99.99990% : 25688.006us 00:10:03.258 99.99999% : 25688.006us 00:10:03.258 00:10:03.258 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:03.258 ============================================================================== 00:10:03.258 Range in us Cumulative IO count 00:10:03.258 
7895.904 - 7948.543: 0.0147% ( 2) 00:10:03.258 7948.543 - 8001.182: 0.0734% ( 8) 00:10:03.258 8001.182 - 8053.822: 0.1907% ( 16) 00:10:03.258 8053.822 - 8106.461: 0.6089% ( 57) 00:10:03.258 8106.461 - 8159.100: 1.4011% ( 108) 00:10:03.258 8159.100 - 8211.740: 2.6555% ( 171) 00:10:03.258 8211.740 - 8264.379: 4.3501% ( 231) 00:10:03.258 8264.379 - 8317.018: 6.5288% ( 297) 00:10:03.258 8317.018 - 8369.658: 9.1256% ( 354) 00:10:03.258 8369.658 - 8422.297: 12.2139% ( 421) 00:10:03.258 8422.297 - 8474.937: 15.5737% ( 458) 00:10:03.258 8474.937 - 8527.576: 18.9774% ( 464) 00:10:03.258 8527.576 - 8580.215: 22.9460% ( 541) 00:10:03.258 8580.215 - 8632.855: 26.7899% ( 524) 00:10:03.258 8632.855 - 8685.494: 30.8172% ( 549) 00:10:03.258 8685.494 - 8738.133: 35.0646% ( 579) 00:10:03.258 8738.133 - 8790.773: 39.4733% ( 601) 00:10:03.258 8790.773 - 8843.412: 43.9701% ( 613) 00:10:03.258 8843.412 - 8896.051: 48.2908% ( 589) 00:10:03.258 8896.051 - 8948.691: 52.6262% ( 591) 00:10:03.258 8948.691 - 9001.330: 57.0496% ( 603) 00:10:03.258 9001.330 - 9053.969: 61.3483% ( 586) 00:10:03.258 9053.969 - 9106.609: 65.5590% ( 574) 00:10:03.258 9106.609 - 9159.248: 69.3589% ( 518) 00:10:03.258 9159.248 - 9211.888: 72.9900% ( 495) 00:10:03.258 9211.888 - 9264.527: 76.0637% ( 419) 00:10:03.258 9264.527 - 9317.166: 78.5138% ( 334) 00:10:03.258 9317.166 - 9369.806: 80.7732% ( 308) 00:10:03.258 9369.806 - 9422.445: 82.6364% ( 254) 00:10:03.258 9422.445 - 9475.084: 84.2723% ( 223) 00:10:03.258 9475.084 - 9527.724: 85.5781% ( 178) 00:10:03.258 9527.724 - 9580.363: 86.6050% ( 140) 00:10:03.258 9580.363 - 9633.002: 87.4340% ( 113) 00:10:03.258 9633.002 - 9685.642: 88.1602% ( 99) 00:10:03.258 9685.642 - 9738.281: 88.7324% ( 78) 00:10:03.258 9738.281 - 9790.920: 89.1212% ( 53) 00:10:03.258 9790.920 - 9843.560: 89.4806% ( 49) 00:10:03.258 9843.560 - 9896.199: 89.8694% ( 53) 00:10:03.258 9896.199 - 9948.839: 90.1482% ( 38) 00:10:03.258 9948.839 - 10001.478: 90.3903% ( 33) 00:10:03.258 10001.478 - 10054.117: 90.6103% ( 30) 00:10:03.258 10054.117 - 10106.757: 90.8671% ( 35) 00:10:03.258 10106.757 - 10159.396: 91.1532% ( 39) 00:10:03.258 10159.396 - 10212.035: 91.3586% ( 28) 00:10:03.258 10212.035 - 10264.675: 91.5860% ( 31) 00:10:03.258 10264.675 - 10317.314: 91.7620% ( 24) 00:10:03.258 10317.314 - 10369.953: 91.9161% ( 21) 00:10:03.258 10369.953 - 10422.593: 92.0114% ( 13) 00:10:03.258 10422.593 - 10475.232: 92.0921% ( 11) 00:10:03.258 10475.232 - 10527.871: 92.2242% ( 18) 00:10:03.258 10527.871 - 10580.511: 92.3782% ( 21) 00:10:03.258 10580.511 - 10633.150: 92.5910% ( 29) 00:10:03.258 10633.150 - 10685.790: 92.7744% ( 25) 00:10:03.258 10685.790 - 10738.429: 92.9798% ( 28) 00:10:03.258 10738.429 - 10791.068: 93.1778% ( 27) 00:10:03.258 10791.068 - 10843.708: 93.3685% ( 26) 00:10:03.258 10843.708 - 10896.347: 93.5813% ( 29) 00:10:03.258 10896.347 - 10948.986: 93.8013% ( 30) 00:10:03.258 10948.986 - 11001.626: 94.0141% ( 29) 00:10:03.258 11001.626 - 11054.265: 94.2268% ( 29) 00:10:03.258 11054.265 - 11106.904: 94.4249% ( 27) 00:10:03.258 11106.904 - 11159.544: 94.6303% ( 28) 00:10:03.258 11159.544 - 11212.183: 94.8504% ( 30) 00:10:03.258 11212.183 - 11264.822: 95.0044% ( 21) 00:10:03.258 11264.822 - 11317.462: 95.1951% ( 26) 00:10:03.258 11317.462 - 11370.101: 95.3932% ( 27) 00:10:03.258 11370.101 - 11422.741: 95.5619% ( 23) 00:10:03.258 11422.741 - 11475.380: 95.7160% ( 21) 00:10:03.258 11475.380 - 11528.019: 95.8920% ( 24) 00:10:03.258 11528.019 - 11580.659: 96.0901% ( 27) 00:10:03.258 11580.659 - 11633.298: 96.2881% ( 27) 
00:10:03.258 11633.298 - 11685.937: 96.4202% ( 18) 00:10:03.258 11685.937 - 11738.577: 96.5156% ( 13) 00:10:03.258 11738.577 - 11791.216: 96.5669% ( 7) 00:10:03.258 11791.216 - 11843.855: 96.5962% ( 4) 00:10:03.258 11843.855 - 11896.495: 96.6329% ( 5) 00:10:03.258 11896.495 - 11949.134: 96.6476% ( 2) 00:10:03.258 11949.134 - 12001.773: 96.6843% ( 5) 00:10:03.258 12001.773 - 12054.413: 96.7136% ( 4) 00:10:03.258 12054.413 - 12107.052: 96.7723% ( 8) 00:10:03.258 12107.052 - 12159.692: 96.8090% ( 5) 00:10:03.258 12159.692 - 12212.331: 96.8530% ( 6) 00:10:03.258 12212.331 - 12264.970: 96.9117% ( 8) 00:10:03.258 12264.970 - 12317.610: 96.9484% ( 5) 00:10:03.258 12317.610 - 12370.249: 96.9997% ( 7) 00:10:03.258 12370.249 - 12422.888: 97.0511% ( 7) 00:10:03.258 12422.888 - 12475.528: 97.0877% ( 5) 00:10:03.258 12475.528 - 12528.167: 97.1171% ( 4) 00:10:03.258 12528.167 - 12580.806: 97.1464% ( 4) 00:10:03.258 12580.806 - 12633.446: 97.1684% ( 3) 00:10:03.258 12633.446 - 12686.085: 97.2051% ( 5) 00:10:03.258 12686.085 - 12738.724: 97.2418% ( 5) 00:10:03.258 12738.724 - 12791.364: 97.2711% ( 4) 00:10:03.258 12791.364 - 12844.003: 97.3078% ( 5) 00:10:03.258 12844.003 - 12896.643: 97.3298% ( 3) 00:10:03.258 12896.643 - 12949.282: 97.3665% ( 5) 00:10:03.258 12949.282 - 13001.921: 97.4105% ( 6) 00:10:03.258 13001.921 - 13054.561: 97.4178% ( 1) 00:10:03.258 13054.561 - 13107.200: 97.4619% ( 6) 00:10:03.258 13107.200 - 13159.839: 97.4839% ( 3) 00:10:03.258 13159.839 - 13212.479: 97.5279% ( 6) 00:10:03.258 13212.479 - 13265.118: 97.5499% ( 3) 00:10:03.258 13265.118 - 13317.757: 97.5866% ( 5) 00:10:03.258 13317.757 - 13370.397: 97.6159% ( 4) 00:10:03.258 13370.397 - 13423.036: 97.6232% ( 1) 00:10:03.258 13423.036 - 13475.676: 97.6306% ( 1) 00:10:03.258 13475.676 - 13580.954: 97.6526% ( 3) 00:10:03.258 13580.954 - 13686.233: 97.6673% ( 2) 00:10:03.258 13686.233 - 13791.512: 97.7039% ( 5) 00:10:03.258 13791.512 - 13896.790: 97.7259% ( 3) 00:10:03.258 13896.790 - 14002.069: 97.7553% ( 4) 00:10:03.258 14002.069 - 14107.348: 97.7846% ( 4) 00:10:03.259 14107.348 - 14212.627: 97.8140% ( 4) 00:10:03.259 14212.627 - 14317.905: 97.8506% ( 5) 00:10:03.259 14317.905 - 14423.184: 97.8800% ( 4) 00:10:03.259 14423.184 - 14528.463: 97.9020% ( 3) 00:10:03.259 14528.463 - 14633.741: 97.9313% ( 4) 00:10:03.259 14633.741 - 14739.020: 97.9607% ( 4) 00:10:03.259 14739.020 - 14844.299: 97.9900% ( 4) 00:10:03.259 14844.299 - 14949.578: 98.0120% ( 3) 00:10:03.259 14949.578 - 15054.856: 98.0487% ( 5) 00:10:03.259 15054.856 - 15160.135: 98.0707% ( 3) 00:10:03.259 15160.135 - 15265.414: 98.1001% ( 4) 00:10:03.259 15265.414 - 15370.692: 98.1221% ( 3) 00:10:03.259 17055.152 - 17160.431: 98.1661% ( 6) 00:10:03.259 17160.431 - 17265.709: 98.1954% ( 4) 00:10:03.259 17265.709 - 17370.988: 98.2321% ( 5) 00:10:03.259 17370.988 - 17476.267: 98.2614% ( 4) 00:10:03.259 17476.267 - 17581.545: 98.3055% ( 6) 00:10:03.259 17581.545 - 17686.824: 98.3568% ( 7) 00:10:03.259 17686.824 - 17792.103: 98.4228% ( 9) 00:10:03.259 17792.103 - 17897.382: 98.4888% ( 9) 00:10:03.259 17897.382 - 18002.660: 98.5622% ( 10) 00:10:03.259 18002.660 - 18107.939: 98.6209% ( 8) 00:10:03.259 18107.939 - 18213.218: 98.6942% ( 10) 00:10:03.259 18213.218 - 18318.496: 98.7529% ( 8) 00:10:03.259 18318.496 - 18423.775: 98.8190% ( 9) 00:10:03.259 18423.775 - 18529.054: 98.8556% ( 5) 00:10:03.259 18529.054 - 18634.333: 98.8850% ( 4) 00:10:03.259 18634.333 - 18739.611: 98.9143% ( 4) 00:10:03.259 18739.611 - 18844.890: 98.9510% ( 5) 00:10:03.259 18844.890 - 18950.169: 98.9730% ( 3) 
00:10:03.259 18950.169 - 19055.447: 99.0023% ( 4) 00:10:03.259 19055.447 - 19160.726: 99.0317% ( 4) 00:10:03.259 19160.726 - 19266.005: 99.0610% ( 4) 00:10:03.259 32425.844 - 32636.402: 99.0830% ( 3) 00:10:03.259 32636.402 - 32846.959: 99.1271% ( 6) 00:10:03.259 32846.959 - 33057.516: 99.1784% ( 7) 00:10:03.259 33057.516 - 33268.074: 99.2224% ( 6) 00:10:03.259 33268.074 - 33478.631: 99.2738% ( 7) 00:10:03.259 33478.631 - 33689.189: 99.3178% ( 6) 00:10:03.259 33689.189 - 33899.746: 99.3691% ( 7) 00:10:03.259 33899.746 - 34110.304: 99.4205% ( 7) 00:10:03.259 34110.304 - 34320.861: 99.4718% ( 7) 00:10:03.259 34320.861 - 34531.418: 99.5232% ( 7) 00:10:03.259 34531.418 - 34741.976: 99.5305% ( 1) 00:10:03.259 39584.797 - 39795.354: 99.5745% ( 6) 00:10:03.259 39795.354 - 40005.912: 99.6259% ( 7) 00:10:03.259 40005.912 - 40216.469: 99.6699% ( 6) 00:10:03.259 40216.469 - 40427.027: 99.7212% ( 7) 00:10:03.259 40427.027 - 40637.584: 99.7726% ( 7) 00:10:03.259 40637.584 - 40848.141: 99.8313% ( 8) 00:10:03.259 40848.141 - 41058.699: 99.8826% ( 7) 00:10:03.259 41058.699 - 41269.256: 99.9340% ( 7) 00:10:03.259 41269.256 - 41479.814: 99.9853% ( 7) 00:10:03.259 41479.814 - 41690.371: 100.0000% ( 2) 00:10:03.259 00:10:03.259 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:03.259 ============================================================================== 00:10:03.259 Range in us Cumulative IO count 00:10:03.259 8001.182 - 8053.822: 0.0073% ( 1) 00:10:03.259 8053.822 - 8106.461: 0.1027% ( 13) 00:10:03.259 8106.461 - 8159.100: 0.3815% ( 38) 00:10:03.259 8159.100 - 8211.740: 0.9390% ( 76) 00:10:03.259 8211.740 - 8264.379: 2.1200% ( 161) 00:10:03.259 8264.379 - 8317.018: 3.7925% ( 228) 00:10:03.259 8317.018 - 8369.658: 6.1840% ( 326) 00:10:03.259 8369.658 - 8422.297: 9.1623% ( 406) 00:10:03.259 8422.297 - 8474.937: 12.5440% ( 461) 00:10:03.259 8474.937 - 8527.576: 16.2265% ( 502) 00:10:03.259 8527.576 - 8580.215: 20.2905% ( 554) 00:10:03.259 8580.215 - 8632.855: 24.6185% ( 590) 00:10:03.259 8632.855 - 8685.494: 29.1227% ( 614) 00:10:03.259 8685.494 - 8738.133: 33.9569% ( 659) 00:10:03.259 8738.133 - 8790.773: 38.9378% ( 679) 00:10:03.259 8790.773 - 8843.412: 43.9701% ( 686) 00:10:03.259 8843.412 - 8896.051: 49.0904% ( 698) 00:10:03.259 8896.051 - 8948.691: 54.2620% ( 705) 00:10:03.259 8948.691 - 9001.330: 59.3677% ( 696) 00:10:03.259 9001.330 - 9053.969: 64.1725% ( 655) 00:10:03.259 9053.969 - 9106.609: 68.4346% ( 581) 00:10:03.259 9106.609 - 9159.248: 72.2491% ( 520) 00:10:03.259 9159.248 - 9211.888: 75.5135% ( 445) 00:10:03.259 9211.888 - 9264.527: 78.1910% ( 365) 00:10:03.259 9264.527 - 9317.166: 80.4504% ( 308) 00:10:03.259 9317.166 - 9369.806: 82.3063% ( 253) 00:10:03.259 9369.806 - 9422.445: 83.8102% ( 205) 00:10:03.259 9422.445 - 9475.084: 85.1673% ( 185) 00:10:03.259 9475.084 - 9527.724: 86.2823% ( 152) 00:10:03.259 9527.724 - 9580.363: 87.2139% ( 127) 00:10:03.259 9580.363 - 9633.002: 87.9621% ( 102) 00:10:03.259 9633.002 - 9685.642: 88.5563% ( 81) 00:10:03.259 9685.642 - 9738.281: 89.0478% ( 67) 00:10:03.259 9738.281 - 9790.920: 89.4806% ( 59) 00:10:03.259 9790.920 - 9843.560: 89.7667% ( 39) 00:10:03.259 9843.560 - 9896.199: 90.0308% ( 36) 00:10:03.259 9896.199 - 9948.839: 90.2949% ( 36) 00:10:03.259 9948.839 - 10001.478: 90.5296% ( 32) 00:10:03.259 10001.478 - 10054.117: 90.8084% ( 38) 00:10:03.259 10054.117 - 10106.757: 91.0285% ( 30) 00:10:03.259 10106.757 - 10159.396: 91.2779% ( 34) 00:10:03.259 10159.396 - 10212.035: 91.4759% ( 27) 00:10:03.259 10212.035 - 10264.675: 
91.6153% ( 19) 00:10:03.259 10264.675 - 10317.314: 91.7327% ( 16) 00:10:03.259 10317.314 - 10369.953: 91.8060% ( 10) 00:10:03.259 10369.953 - 10422.593: 91.9087% ( 14) 00:10:03.259 10422.593 - 10475.232: 91.9968% ( 12) 00:10:03.259 10475.232 - 10527.871: 92.0848% ( 12) 00:10:03.259 10527.871 - 10580.511: 92.1948% ( 15) 00:10:03.259 10580.511 - 10633.150: 92.3489% ( 21) 00:10:03.259 10633.150 - 10685.790: 92.5396% ( 26) 00:10:03.259 10685.790 - 10738.429: 92.7964% ( 35) 00:10:03.259 10738.429 - 10791.068: 93.0458% ( 34) 00:10:03.259 10791.068 - 10843.708: 93.2805% ( 32) 00:10:03.259 10843.708 - 10896.347: 93.5006% ( 30) 00:10:03.259 10896.347 - 10948.986: 93.7427% ( 33) 00:10:03.259 10948.986 - 11001.626: 93.9554% ( 29) 00:10:03.259 11001.626 - 11054.265: 94.1681% ( 29) 00:10:03.259 11054.265 - 11106.904: 94.3882% ( 30) 00:10:03.259 11106.904 - 11159.544: 94.6156% ( 31) 00:10:03.259 11159.544 - 11212.183: 94.8577% ( 33) 00:10:03.259 11212.183 - 11264.822: 95.0998% ( 33) 00:10:03.259 11264.822 - 11317.462: 95.3198% ( 30) 00:10:03.259 11317.462 - 11370.101: 95.5252% ( 28) 00:10:03.259 11370.101 - 11422.741: 95.7233% ( 27) 00:10:03.259 11422.741 - 11475.380: 95.9434% ( 30) 00:10:03.259 11475.380 - 11528.019: 96.1561% ( 29) 00:10:03.259 11528.019 - 11580.659: 96.2955% ( 19) 00:10:03.259 11580.659 - 11633.298: 96.3688% ( 10) 00:10:03.259 11633.298 - 11685.937: 96.4202% ( 7) 00:10:03.259 11685.937 - 11738.577: 96.4569% ( 5) 00:10:03.259 11738.577 - 11791.216: 96.4935% ( 5) 00:10:03.259 11791.216 - 11843.855: 96.5156% ( 3) 00:10:03.259 11843.855 - 11896.495: 96.5449% ( 4) 00:10:03.259 11896.495 - 11949.134: 96.5742% ( 4) 00:10:03.259 11949.134 - 12001.773: 96.6036% ( 4) 00:10:03.259 12001.773 - 12054.413: 96.6329% ( 4) 00:10:03.259 12054.413 - 12107.052: 96.6549% ( 3) 00:10:03.259 12107.052 - 12159.692: 96.6843% ( 4) 00:10:03.259 12159.692 - 12212.331: 96.7136% ( 4) 00:10:03.259 12212.331 - 12264.970: 96.7503% ( 5) 00:10:03.260 12264.970 - 12317.610: 96.7870% ( 5) 00:10:03.260 12317.610 - 12370.249: 96.8163% ( 4) 00:10:03.260 12370.249 - 12422.888: 96.8530% ( 5) 00:10:03.260 12422.888 - 12475.528: 96.8823% ( 4) 00:10:03.260 12475.528 - 12528.167: 96.9117% ( 4) 00:10:03.260 12528.167 - 12580.806: 96.9484% ( 5) 00:10:03.260 12580.806 - 12633.446: 96.9704% ( 3) 00:10:03.260 12633.446 - 12686.085: 96.9924% ( 3) 00:10:03.260 12686.085 - 12738.724: 97.0070% ( 2) 00:10:03.260 12738.724 - 12791.364: 97.0290% ( 3) 00:10:03.260 12791.364 - 12844.003: 97.0437% ( 2) 00:10:03.260 12844.003 - 12896.643: 97.0657% ( 3) 00:10:03.260 12896.643 - 12949.282: 97.0804% ( 2) 00:10:03.260 12949.282 - 13001.921: 97.1097% ( 4) 00:10:03.260 13001.921 - 13054.561: 97.1538% ( 6) 00:10:03.260 13054.561 - 13107.200: 97.1904% ( 5) 00:10:03.260 13107.200 - 13159.839: 97.2271% ( 5) 00:10:03.260 13159.839 - 13212.479: 97.2785% ( 7) 00:10:03.260 13212.479 - 13265.118: 97.3225% ( 6) 00:10:03.260 13265.118 - 13317.757: 97.3592% ( 5) 00:10:03.260 13317.757 - 13370.397: 97.4032% ( 6) 00:10:03.260 13370.397 - 13423.036: 97.4472% ( 6) 00:10:03.260 13423.036 - 13475.676: 97.4765% ( 4) 00:10:03.260 13475.676 - 13580.954: 97.5646% ( 12) 00:10:03.260 13580.954 - 13686.233: 97.6379% ( 10) 00:10:03.260 13686.233 - 13791.512: 97.7186% ( 11) 00:10:03.260 13791.512 - 13896.790: 97.7993% ( 11) 00:10:03.260 13896.790 - 14002.069: 97.8727% ( 10) 00:10:03.260 14002.069 - 14107.348: 97.9167% ( 6) 00:10:03.260 14107.348 - 14212.627: 97.9460% ( 4) 00:10:03.260 14212.627 - 14317.905: 97.9754% ( 4) 00:10:03.260 14317.905 - 14423.184: 98.0047% ( 4) 
00:10:03.260 14423.184 - 14528.463: 98.0340% ( 4) 00:10:03.260 14528.463 - 14633.741: 98.0634% ( 4) 00:10:03.260 14633.741 - 14739.020: 98.1001% ( 5) 00:10:03.260 14739.020 - 14844.299: 98.1221% ( 3) 00:10:03.260 17160.431 - 17265.709: 98.1441% ( 3) 00:10:03.260 17265.709 - 17370.988: 98.1808% ( 5) 00:10:03.260 17370.988 - 17476.267: 98.2174% ( 5) 00:10:03.260 17476.267 - 17581.545: 98.2541% ( 5) 00:10:03.260 17581.545 - 17686.824: 98.2981% ( 6) 00:10:03.260 17686.824 - 17792.103: 98.3715% ( 10) 00:10:03.260 17792.103 - 17897.382: 98.4375% ( 9) 00:10:03.260 17897.382 - 18002.660: 98.5109% ( 10) 00:10:03.260 18002.660 - 18107.939: 98.5915% ( 11) 00:10:03.260 18107.939 - 18213.218: 98.6649% ( 10) 00:10:03.260 18213.218 - 18318.496: 98.7309% ( 9) 00:10:03.260 18318.496 - 18423.775: 98.7896% ( 8) 00:10:03.260 18423.775 - 18529.054: 98.8556% ( 9) 00:10:03.260 18529.054 - 18634.333: 98.9217% ( 9) 00:10:03.260 18634.333 - 18739.611: 98.9583% ( 5) 00:10:03.260 18739.611 - 18844.890: 98.9877% ( 4) 00:10:03.260 18844.890 - 18950.169: 99.0244% ( 5) 00:10:03.260 18950.169 - 19055.447: 99.0610% ( 5) 00:10:03.260 30320.270 - 30530.827: 99.0904% ( 4) 00:10:03.260 30530.827 - 30741.385: 99.1417% ( 7) 00:10:03.260 30741.385 - 30951.942: 99.1857% ( 6) 00:10:03.260 30951.942 - 31162.500: 99.2371% ( 7) 00:10:03.260 31162.500 - 31373.057: 99.2884% ( 7) 00:10:03.260 31373.057 - 31583.614: 99.3398% ( 7) 00:10:03.260 31583.614 - 31794.172: 99.3911% ( 7) 00:10:03.260 31794.172 - 32004.729: 99.4425% ( 7) 00:10:03.260 32004.729 - 32215.287: 99.4938% ( 7) 00:10:03.260 32215.287 - 32425.844: 99.5305% ( 5) 00:10:03.260 37479.222 - 37689.780: 99.5819% ( 7) 00:10:03.260 37689.780 - 37900.337: 99.6332% ( 7) 00:10:03.260 37900.337 - 38110.895: 99.6772% ( 6) 00:10:03.260 38110.895 - 38321.452: 99.7359% ( 8) 00:10:03.260 38321.452 - 38532.010: 99.7873% ( 7) 00:10:03.260 38532.010 - 38742.567: 99.8313% ( 6) 00:10:03.260 38742.567 - 38953.124: 99.8826% ( 7) 00:10:03.260 38953.124 - 39163.682: 99.9340% ( 7) 00:10:03.260 39163.682 - 39374.239: 99.9853% ( 7) 00:10:03.260 39374.239 - 39584.797: 100.0000% ( 2) 00:10:03.260 00:10:03.260 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:03.260 ============================================================================== 00:10:03.260 Range in us Cumulative IO count 00:10:03.260 8053.822 - 8106.461: 0.0734% ( 10) 00:10:03.260 8106.461 - 8159.100: 0.3374% ( 36) 00:10:03.260 8159.100 - 8211.740: 0.9536% ( 84) 00:10:03.260 8211.740 - 8264.379: 2.1200% ( 159) 00:10:03.260 8264.379 - 8317.018: 3.7925% ( 228) 00:10:03.260 8317.018 - 8369.658: 5.9786% ( 298) 00:10:03.260 8369.658 - 8422.297: 8.6634% ( 366) 00:10:03.260 8422.297 - 8474.937: 11.8104% ( 429) 00:10:03.260 8474.937 - 8527.576: 15.4636% ( 498) 00:10:03.260 8527.576 - 8580.215: 19.5569% ( 558) 00:10:03.260 8580.215 - 8632.855: 23.8630% ( 587) 00:10:03.260 8632.855 - 8685.494: 28.4991% ( 632) 00:10:03.260 8685.494 - 8738.133: 33.4727% ( 678) 00:10:03.260 8738.133 - 8790.773: 38.6444% ( 705) 00:10:03.260 8790.773 - 8843.412: 43.8160% ( 705) 00:10:03.260 8843.412 - 8896.051: 49.0023% ( 707) 00:10:03.260 8896.051 - 8948.691: 54.2767% ( 719) 00:10:03.260 8948.691 - 9001.330: 59.3897% ( 697) 00:10:03.260 9001.330 - 9053.969: 64.1799% ( 653) 00:10:03.260 9053.969 - 9106.609: 68.4639% ( 584) 00:10:03.260 9106.609 - 9159.248: 72.2198% ( 512) 00:10:03.260 9159.248 - 9211.888: 75.5135% ( 449) 00:10:03.260 9211.888 - 9264.527: 78.2937% ( 379) 00:10:03.260 9264.527 - 9317.166: 80.7365% ( 333) 00:10:03.260 9317.166 - 9369.806: 
82.7245% ( 271) 00:10:03.260 9369.806 - 9422.445: 84.3677% ( 224) 00:10:03.260 9422.445 - 9475.084: 85.7174% ( 184) 00:10:03.260 9475.084 - 9527.724: 86.8031% ( 148) 00:10:03.260 9527.724 - 9580.363: 87.5954% ( 108) 00:10:03.260 9580.363 - 9633.002: 88.2262% ( 86) 00:10:03.260 9633.002 - 9685.642: 88.7397% ( 70) 00:10:03.260 9685.642 - 9738.281: 89.1652% ( 58) 00:10:03.260 9738.281 - 9790.920: 89.5393% ( 51) 00:10:03.260 9790.920 - 9843.560: 89.8621% ( 44) 00:10:03.260 9843.560 - 9896.199: 90.1775% ( 43) 00:10:03.260 9896.199 - 9948.839: 90.4710% ( 40) 00:10:03.260 9948.839 - 10001.478: 90.7644% ( 40) 00:10:03.260 10001.478 - 10054.117: 91.0065% ( 33) 00:10:03.260 10054.117 - 10106.757: 91.2779% ( 37) 00:10:03.260 10106.757 - 10159.396: 91.4833% ( 28) 00:10:03.260 10159.396 - 10212.035: 91.6520% ( 23) 00:10:03.260 10212.035 - 10264.675: 91.7767% ( 17) 00:10:03.260 10264.675 - 10317.314: 91.8867% ( 15) 00:10:03.260 10317.314 - 10369.953: 91.9894% ( 14) 00:10:03.260 10369.953 - 10422.593: 92.0848% ( 13) 00:10:03.260 10422.593 - 10475.232: 92.1875% ( 14) 00:10:03.260 10475.232 - 10527.871: 92.2755% ( 12) 00:10:03.260 10527.871 - 10580.511: 92.4002% ( 17) 00:10:03.260 10580.511 - 10633.150: 92.5543% ( 21) 00:10:03.260 10633.150 - 10685.790: 92.7597% ( 28) 00:10:03.260 10685.790 - 10738.429: 92.9798% ( 30) 00:10:03.260 10738.429 - 10791.068: 93.2218% ( 33) 00:10:03.260 10791.068 - 10843.708: 93.4419% ( 30) 00:10:03.260 10843.708 - 10896.347: 93.6987% ( 35) 00:10:03.260 10896.347 - 10948.986: 93.9554% ( 35) 00:10:03.260 10948.986 - 11001.626: 94.1755% ( 30) 00:10:03.260 11001.626 - 11054.265: 94.4322% ( 35) 00:10:03.260 11054.265 - 11106.904: 94.6816% ( 34) 00:10:03.260 11106.904 - 11159.544: 94.8797% ( 27) 00:10:03.260 11159.544 - 11212.183: 95.1071% ( 31) 00:10:03.260 11212.183 - 11264.822: 95.3125% ( 28) 00:10:03.260 11264.822 - 11317.462: 95.5326% ( 30) 00:10:03.260 11317.462 - 11370.101: 95.7526% ( 30) 00:10:03.260 11370.101 - 11422.741: 95.9800% ( 31) 00:10:03.260 11422.741 - 11475.380: 96.1781% ( 27) 00:10:03.260 11475.380 - 11528.019: 96.3542% ( 24) 00:10:03.260 11528.019 - 11580.659: 96.5229% ( 23) 00:10:03.260 11580.659 - 11633.298: 96.6329% ( 15) 00:10:03.260 11633.298 - 11685.937: 96.7063% ( 10) 00:10:03.260 11685.937 - 11738.577: 96.7723% ( 9) 00:10:03.260 11738.577 - 11791.216: 96.8310% ( 8) 00:10:03.260 11791.216 - 11843.855: 96.8750% ( 6) 00:10:03.260 11843.855 - 11896.495: 96.9190% ( 6) 00:10:03.260 11896.495 - 11949.134: 96.9557% ( 5) 00:10:03.260 11949.134 - 12001.773: 96.9924% ( 5) 00:10:03.260 12001.773 - 12054.413: 97.0290% ( 5) 00:10:03.260 12054.413 - 12107.052: 97.0657% ( 5) 00:10:03.261 12107.052 - 12159.692: 97.0804% ( 2) 00:10:03.261 12159.692 - 12212.331: 97.1024% ( 3) 00:10:03.261 12212.331 - 12264.970: 97.1171% ( 2) 00:10:03.261 12264.970 - 12317.610: 97.1391% ( 3) 00:10:03.261 12317.610 - 12370.249: 97.1538% ( 2) 00:10:03.261 12370.249 - 12422.888: 97.1684% ( 2) 00:10:03.261 12422.888 - 12475.528: 97.1831% ( 2) 00:10:03.261 12844.003 - 12896.643: 97.1978% ( 2) 00:10:03.261 12896.643 - 12949.282: 97.2124% ( 2) 00:10:03.261 12949.282 - 13001.921: 97.2271% ( 2) 00:10:03.261 13001.921 - 13054.561: 97.2418% ( 2) 00:10:03.261 13054.561 - 13107.200: 97.2638% ( 3) 00:10:03.261 13107.200 - 13159.839: 97.2858% ( 3) 00:10:03.261 13159.839 - 13212.479: 97.3005% ( 2) 00:10:03.261 13212.479 - 13265.118: 97.3151% ( 2) 00:10:03.261 13265.118 - 13317.757: 97.3298% ( 2) 00:10:03.261 13317.757 - 13370.397: 97.3445% ( 2) 00:10:03.261 13370.397 - 13423.036: 97.3592% ( 2) 00:10:03.261 
13423.036 - 13475.676: 97.3738% ( 2) 00:10:03.261 13475.676 - 13580.954: 97.4032% ( 4) 00:10:03.261 13580.954 - 13686.233: 97.4398% ( 5) 00:10:03.261 13686.233 - 13791.512: 97.4619% ( 3) 00:10:03.261 13791.512 - 13896.790: 97.4985% ( 5) 00:10:03.261 13896.790 - 14002.069: 97.5279% ( 4) 00:10:03.261 14002.069 - 14107.348: 97.5572% ( 4) 00:10:03.261 14107.348 - 14212.627: 97.5939% ( 5) 00:10:03.261 14212.627 - 14317.905: 97.6452% ( 7) 00:10:03.261 14317.905 - 14423.184: 97.7113% ( 9) 00:10:03.261 14423.184 - 14528.463: 97.7626% ( 7) 00:10:03.261 14528.463 - 14633.741: 97.8140% ( 7) 00:10:03.261 14633.741 - 14739.020: 97.8580% ( 6) 00:10:03.261 14739.020 - 14844.299: 97.9020% ( 6) 00:10:03.261 14844.299 - 14949.578: 97.9460% ( 6) 00:10:03.261 14949.578 - 15054.856: 97.9827% ( 5) 00:10:03.261 15054.856 - 15160.135: 98.0267% ( 6) 00:10:03.261 15160.135 - 15265.414: 98.0707% ( 6) 00:10:03.261 15265.414 - 15370.692: 98.1147% ( 6) 00:10:03.261 15370.692 - 15475.971: 98.1221% ( 1) 00:10:03.261 16318.201 - 16423.480: 98.1587% ( 5) 00:10:03.261 16423.480 - 16528.758: 98.1881% ( 4) 00:10:03.261 16528.758 - 16634.037: 98.2248% ( 5) 00:10:03.261 16634.037 - 16739.316: 98.2614% ( 5) 00:10:03.261 16739.316 - 16844.594: 98.2908% ( 4) 00:10:03.261 16844.594 - 16949.873: 98.3275% ( 5) 00:10:03.261 16949.873 - 17055.152: 98.3641% ( 5) 00:10:03.261 17055.152 - 17160.431: 98.4008% ( 5) 00:10:03.261 17160.431 - 17265.709: 98.4302% ( 4) 00:10:03.261 17265.709 - 17370.988: 98.4668% ( 5) 00:10:03.261 17370.988 - 17476.267: 98.5035% ( 5) 00:10:03.261 17476.267 - 17581.545: 98.5402% ( 5) 00:10:03.261 17581.545 - 17686.824: 98.5769% ( 5) 00:10:03.261 17686.824 - 17792.103: 98.5915% ( 2) 00:10:03.261 18107.939 - 18213.218: 98.6209% ( 4) 00:10:03.261 18213.218 - 18318.496: 98.6576% ( 5) 00:10:03.261 18318.496 - 18423.775: 98.6942% ( 5) 00:10:03.261 18423.775 - 18529.054: 98.7309% ( 5) 00:10:03.261 18529.054 - 18634.333: 98.7676% ( 5) 00:10:03.261 18634.333 - 18739.611: 98.8116% ( 6) 00:10:03.261 18739.611 - 18844.890: 98.8483% ( 5) 00:10:03.261 18844.890 - 18950.169: 98.8850% ( 5) 00:10:03.261 18950.169 - 19055.447: 98.9217% ( 5) 00:10:03.261 19055.447 - 19160.726: 98.9583% ( 5) 00:10:03.261 19160.726 - 19266.005: 99.0023% ( 6) 00:10:03.261 19266.005 - 19371.284: 99.0317% ( 4) 00:10:03.261 19371.284 - 19476.562: 99.0610% ( 4) 00:10:03.261 28846.368 - 29056.925: 99.1050% ( 6) 00:10:03.261 29056.925 - 29267.483: 99.1564% ( 7) 00:10:03.261 29267.483 - 29478.040: 99.2151% ( 8) 00:10:03.261 29478.040 - 29688.598: 99.2591% ( 6) 00:10:03.261 29688.598 - 29899.155: 99.3178% ( 8) 00:10:03.261 29899.155 - 30109.712: 99.3691% ( 7) 00:10:03.261 30109.712 - 30320.270: 99.4131% ( 6) 00:10:03.261 30320.270 - 30530.827: 99.4645% ( 7) 00:10:03.261 30530.827 - 30741.385: 99.5158% ( 7) 00:10:03.261 30741.385 - 30951.942: 99.5305% ( 2) 00:10:03.261 35794.763 - 36005.320: 99.5672% ( 5) 00:10:03.261 36005.320 - 36215.878: 99.6259% ( 8) 00:10:03.261 36215.878 - 36426.435: 99.6772% ( 7) 00:10:03.261 36426.435 - 36636.993: 99.7286% ( 7) 00:10:03.261 36636.993 - 36847.550: 99.7726% ( 6) 00:10:03.261 36847.550 - 37058.108: 99.8239% ( 7) 00:10:03.261 37058.108 - 37268.665: 99.8753% ( 7) 00:10:03.261 37268.665 - 37479.222: 99.9266% ( 7) 00:10:03.261 37479.222 - 37689.780: 99.9780% ( 7) 00:10:03.261 37689.780 - 37900.337: 100.0000% ( 3) 00:10:03.261 00:10:03.261 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:03.261 ============================================================================== 00:10:03.261 Range in us Cumulative 
IO count 00:10:03.261 8053.822 - 8106.461: 0.0954% ( 13) 00:10:03.261 8106.461 - 8159.100: 0.4181% ( 44) 00:10:03.261 8159.100 - 8211.740: 1.0417% ( 85) 00:10:03.261 8211.740 - 8264.379: 2.1934% ( 157) 00:10:03.261 8264.379 - 8317.018: 3.7999% ( 219) 00:10:03.261 8317.018 - 8369.658: 5.8319% ( 277) 00:10:03.261 8369.658 - 8422.297: 8.3994% ( 350) 00:10:03.261 8422.297 - 8474.937: 11.6124% ( 438) 00:10:03.261 8474.937 - 8527.576: 15.4123% ( 518) 00:10:03.261 8527.576 - 8580.215: 19.4029% ( 544) 00:10:03.261 8580.215 - 8632.855: 23.8116% ( 601) 00:10:03.261 8632.855 - 8685.494: 28.5945% ( 652) 00:10:03.261 8685.494 - 8738.133: 33.4947% ( 668) 00:10:03.261 8738.133 - 8790.773: 38.5783% ( 693) 00:10:03.261 8790.773 - 8843.412: 43.7500% ( 705) 00:10:03.261 8843.412 - 8896.051: 49.0757% ( 726) 00:10:03.261 8896.051 - 8948.691: 54.2694% ( 708) 00:10:03.261 8948.691 - 9001.330: 59.2576% ( 680) 00:10:03.261 9001.330 - 9053.969: 64.0992% ( 660) 00:10:03.261 9053.969 - 9106.609: 68.3906% ( 585) 00:10:03.261 9106.609 - 9159.248: 72.1391% ( 511) 00:10:03.261 9159.248 - 9211.888: 75.4035% ( 445) 00:10:03.261 9211.888 - 9264.527: 78.4111% ( 410) 00:10:03.261 9264.527 - 9317.166: 80.9786% ( 350) 00:10:03.261 9317.166 - 9369.806: 82.9739% ( 272) 00:10:03.261 9369.806 - 9422.445: 84.5290% ( 212) 00:10:03.261 9422.445 - 9475.084: 85.8641% ( 182) 00:10:03.261 9475.084 - 9527.724: 86.8765% ( 138) 00:10:03.261 9527.724 - 9580.363: 87.6247% ( 102) 00:10:03.261 9580.363 - 9633.002: 88.2482% ( 85) 00:10:03.261 9633.002 - 9685.642: 88.7471% ( 68) 00:10:03.261 9685.642 - 9738.281: 89.1945% ( 61) 00:10:03.261 9738.281 - 9790.920: 89.5540% ( 49) 00:10:03.261 9790.920 - 9843.560: 89.8841% ( 45) 00:10:03.261 9843.560 - 9896.199: 90.1482% ( 36) 00:10:03.261 9896.199 - 9948.839: 90.4049% ( 35) 00:10:03.261 9948.839 - 10001.478: 90.6837% ( 38) 00:10:03.261 10001.478 - 10054.117: 90.9624% ( 38) 00:10:03.261 10054.117 - 10106.757: 91.2119% ( 34) 00:10:03.261 10106.757 - 10159.396: 91.4099% ( 27) 00:10:03.261 10159.396 - 10212.035: 91.5346% ( 17) 00:10:03.261 10212.035 - 10264.675: 91.6813% ( 20) 00:10:03.261 10264.675 - 10317.314: 91.8134% ( 18) 00:10:03.261 10317.314 - 10369.953: 91.9381% ( 17) 00:10:03.261 10369.953 - 10422.593: 92.0555% ( 16) 00:10:03.261 10422.593 - 10475.232: 92.1728% ( 16) 00:10:03.261 10475.232 - 10527.871: 92.2755% ( 14) 00:10:03.261 10527.871 - 10580.511: 92.3856% ( 15) 00:10:03.261 10580.511 - 10633.150: 92.5029% ( 16) 00:10:03.261 10633.150 - 10685.790: 92.6570% ( 21) 00:10:03.261 10685.790 - 10738.429: 92.8844% ( 31) 00:10:03.261 10738.429 - 10791.068: 93.1265% ( 33) 00:10:03.261 10791.068 - 10843.708: 93.3832% ( 35) 00:10:03.261 10843.708 - 10896.347: 93.6253% ( 33) 00:10:03.261 10896.347 - 10948.986: 93.9040% ( 38) 00:10:03.261 10948.986 - 11001.626: 94.1535% ( 34) 00:10:03.261 11001.626 - 11054.265: 94.4322% ( 38) 00:10:03.261 11054.265 - 11106.904: 94.7110% ( 38) 00:10:03.261 11106.904 - 11159.544: 94.9604% ( 34) 00:10:03.261 11159.544 - 11212.183: 95.1805% ( 30) 00:10:03.261 11212.183 - 11264.822: 95.4299% ( 34) 00:10:03.261 11264.822 - 11317.462: 95.6573% ( 31) 00:10:03.261 11317.462 - 11370.101: 95.9067% ( 34) 00:10:03.261 11370.101 - 11422.741: 96.0901% ( 25) 00:10:03.261 11422.741 - 11475.380: 96.2955% ( 28) 00:10:03.261 11475.380 - 11528.019: 96.4642% ( 23) 00:10:03.261 11528.019 - 11580.659: 96.6036% ( 19) 00:10:03.262 11580.659 - 11633.298: 96.7210% ( 16) 00:10:03.262 11633.298 - 11685.937: 96.8163% ( 13) 00:10:03.262 11685.937 - 11738.577: 96.8750% ( 8) 00:10:03.262 11738.577 - 
11791.216: 96.9117% ( 5) 00:10:03.262 11791.216 - 11843.855: 96.9484% ( 5) 00:10:03.262 11843.855 - 11896.495: 96.9850% ( 5) 00:10:03.262 11896.495 - 11949.134: 97.0144% ( 4) 00:10:03.262 11949.134 - 12001.773: 97.0584% ( 6) 00:10:03.262 12001.773 - 12054.413: 97.0877% ( 4) 00:10:03.262 12054.413 - 12107.052: 97.1244% ( 5) 00:10:03.262 12107.052 - 12159.692: 97.1391% ( 2) 00:10:03.262 12159.692 - 12212.331: 97.1611% ( 3) 00:10:03.262 12212.331 - 12264.970: 97.1831% ( 3) 00:10:03.262 12686.085 - 12738.724: 97.1978% ( 2) 00:10:03.262 12738.724 - 12791.364: 97.2124% ( 2) 00:10:03.262 12791.364 - 12844.003: 97.2271% ( 2) 00:10:03.262 12844.003 - 12896.643: 97.2491% ( 3) 00:10:03.262 12896.643 - 12949.282: 97.2638% ( 2) 00:10:03.262 12949.282 - 13001.921: 97.2785% ( 2) 00:10:03.262 13001.921 - 13054.561: 97.2931% ( 2) 00:10:03.262 13054.561 - 13107.200: 97.3078% ( 2) 00:10:03.262 13107.200 - 13159.839: 97.3225% ( 2) 00:10:03.262 13159.839 - 13212.479: 97.3371% ( 2) 00:10:03.262 13212.479 - 13265.118: 97.3592% ( 3) 00:10:03.262 13265.118 - 13317.757: 97.3738% ( 2) 00:10:03.262 13317.757 - 13370.397: 97.3885% ( 2) 00:10:03.262 13370.397 - 13423.036: 97.4032% ( 2) 00:10:03.262 13423.036 - 13475.676: 97.4178% ( 2) 00:10:03.262 13475.676 - 13580.954: 97.4545% ( 5) 00:10:03.262 13580.954 - 13686.233: 97.4839% ( 4) 00:10:03.262 13686.233 - 13791.512: 97.5132% ( 4) 00:10:03.262 13791.512 - 13896.790: 97.5425% ( 4) 00:10:03.262 13896.790 - 14002.069: 97.5719% ( 4) 00:10:03.262 14002.069 - 14107.348: 97.6086% ( 5) 00:10:03.262 14107.348 - 14212.627: 97.6379% ( 4) 00:10:03.262 14212.627 - 14317.905: 97.6526% ( 2) 00:10:03.262 14949.578 - 15054.856: 97.6893% ( 5) 00:10:03.262 15054.856 - 15160.135: 97.7333% ( 6) 00:10:03.262 15160.135 - 15265.414: 97.7846% ( 7) 00:10:03.262 15265.414 - 15370.692: 97.8213% ( 5) 00:10:03.262 15370.692 - 15475.971: 97.8653% ( 6) 00:10:03.262 15475.971 - 15581.250: 97.9167% ( 7) 00:10:03.262 15581.250 - 15686.529: 97.9607% ( 6) 00:10:03.262 15686.529 - 15791.807: 98.0047% ( 6) 00:10:03.262 15791.807 - 15897.086: 98.0560% ( 7) 00:10:03.262 15897.086 - 16002.365: 98.1367% ( 11) 00:10:03.262 16002.365 - 16107.643: 98.2028% ( 9) 00:10:03.262 16107.643 - 16212.922: 98.2394% ( 5) 00:10:03.262 16212.922 - 16318.201: 98.2761% ( 5) 00:10:03.262 16318.201 - 16423.480: 98.3128% ( 5) 00:10:03.262 16423.480 - 16528.758: 98.3495% ( 5) 00:10:03.262 16528.758 - 16634.037: 98.3862% ( 5) 00:10:03.262 16634.037 - 16739.316: 98.4155% ( 4) 00:10:03.262 16739.316 - 16844.594: 98.4522% ( 5) 00:10:03.262 16844.594 - 16949.873: 98.4888% ( 5) 00:10:03.262 16949.873 - 17055.152: 98.5255% ( 5) 00:10:03.262 17055.152 - 17160.431: 98.5622% ( 5) 00:10:03.262 17160.431 - 17265.709: 98.5915% ( 4) 00:10:03.262 18318.496 - 18423.775: 98.6136% ( 3) 00:10:03.262 18423.775 - 18529.054: 98.6429% ( 4) 00:10:03.262 18529.054 - 18634.333: 98.6869% ( 6) 00:10:03.262 18634.333 - 18739.611: 98.7236% ( 5) 00:10:03.262 18739.611 - 18844.890: 98.7603% ( 5) 00:10:03.262 18844.890 - 18950.169: 98.7969% ( 5) 00:10:03.262 18950.169 - 19055.447: 98.8336% ( 5) 00:10:03.262 19055.447 - 19160.726: 98.8703% ( 5) 00:10:03.262 19160.726 - 19266.005: 98.9070% ( 5) 00:10:03.262 19266.005 - 19371.284: 98.9437% ( 5) 00:10:03.262 19371.284 - 19476.562: 98.9803% ( 5) 00:10:03.262 19476.562 - 19581.841: 99.0170% ( 5) 00:10:03.262 19581.841 - 19687.120: 99.0537% ( 5) 00:10:03.262 19687.120 - 19792.398: 99.0610% ( 1) 00:10:03.262 26635.515 - 26740.794: 99.0684% ( 1) 00:10:03.262 26740.794 - 26846.072: 99.0977% ( 4) 00:10:03.262 26846.072 - 
26951.351: 99.1197% ( 3) 00:10:03.262 26951.351 - 27161.908: 99.1711% ( 7) 00:10:03.262 27161.908 - 27372.466: 99.2224% ( 7) 00:10:03.262 27372.466 - 27583.023: 99.2738% ( 7) 00:10:03.262 27583.023 - 27793.581: 99.3251% ( 7) 00:10:03.262 27793.581 - 28004.138: 99.3691% ( 6) 00:10:03.262 28004.138 - 28214.696: 99.4205% ( 7) 00:10:03.262 28214.696 - 28425.253: 99.4645% ( 6) 00:10:03.262 28425.253 - 28635.810: 99.5232% ( 8) 00:10:03.262 28635.810 - 28846.368: 99.5305% ( 1) 00:10:03.262 33689.189 - 33899.746: 99.5819% ( 7) 00:10:03.262 33899.746 - 34110.304: 99.6332% ( 7) 00:10:03.262 34110.304 - 34320.861: 99.6846% ( 7) 00:10:03.262 34320.861 - 34531.418: 99.7359% ( 7) 00:10:03.262 34531.418 - 34741.976: 99.7946% ( 8) 00:10:03.262 34741.976 - 34952.533: 99.8460% ( 7) 00:10:03.262 34952.533 - 35163.091: 99.8973% ( 7) 00:10:03.262 35163.091 - 35373.648: 99.9560% ( 8) 00:10:03.262 35373.648 - 35584.206: 100.0000% ( 6) 00:10:03.262 00:10:03.262 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:03.262 ============================================================================== 00:10:03.262 Range in us Cumulative IO count 00:10:03.262 8001.182 - 8053.822: 0.0147% ( 2) 00:10:03.262 8053.822 - 8106.461: 0.1467% ( 18) 00:10:03.262 8106.461 - 8159.100: 0.4842% ( 46) 00:10:03.262 8159.100 - 8211.740: 1.1737% ( 94) 00:10:03.262 8211.740 - 8264.379: 2.2227% ( 143) 00:10:03.262 8264.379 - 8317.018: 3.8586% ( 223) 00:10:03.262 8317.018 - 8369.658: 5.8832% ( 276) 00:10:03.262 8369.658 - 8422.297: 8.5167% ( 359) 00:10:03.262 8422.297 - 8474.937: 11.5977% ( 420) 00:10:03.262 8474.937 - 8527.576: 15.2802% ( 502) 00:10:03.262 8527.576 - 8580.215: 19.4029% ( 562) 00:10:03.262 8580.215 - 8632.855: 23.9657% ( 622) 00:10:03.262 8632.855 - 8685.494: 28.7485% ( 652) 00:10:03.262 8685.494 - 8738.133: 33.7735% ( 685) 00:10:03.262 8738.133 - 8790.773: 38.8131% ( 687) 00:10:03.262 8790.773 - 8843.412: 44.1094% ( 722) 00:10:03.262 8843.412 - 8896.051: 49.2004% ( 694) 00:10:03.262 8896.051 - 8948.691: 54.3647% ( 704) 00:10:03.262 8948.691 - 9001.330: 59.3750% ( 683) 00:10:03.262 9001.330 - 9053.969: 63.9378% ( 622) 00:10:03.262 9053.969 - 9106.609: 68.2512% ( 588) 00:10:03.262 9106.609 - 9159.248: 72.0584% ( 519) 00:10:03.262 9159.248 - 9211.888: 75.3668% ( 451) 00:10:03.262 9211.888 - 9264.527: 78.3304% ( 404) 00:10:03.262 9264.527 - 9317.166: 80.7805% ( 334) 00:10:03.262 9317.166 - 9369.806: 82.8712% ( 285) 00:10:03.262 9369.806 - 9422.445: 84.5217% ( 225) 00:10:03.262 9422.445 - 9475.084: 85.7541% ( 168) 00:10:03.262 9475.084 - 9527.724: 86.7444% ( 135) 00:10:03.262 9527.724 - 9580.363: 87.4780% ( 100) 00:10:03.262 9580.363 - 9633.002: 88.0502% ( 78) 00:10:03.262 9633.002 - 9685.642: 88.5563% ( 69) 00:10:03.262 9685.642 - 9738.281: 89.0258% ( 64) 00:10:03.262 9738.281 - 9790.920: 89.4293% ( 55) 00:10:03.262 9790.920 - 9843.560: 89.7741% ( 47) 00:10:03.262 9843.560 - 9896.199: 90.1262% ( 48) 00:10:03.262 9896.199 - 9948.839: 90.4710% ( 47) 00:10:03.262 9948.839 - 10001.478: 90.7497% ( 38) 00:10:03.262 10001.478 - 10054.117: 90.9991% ( 34) 00:10:03.262 10054.117 - 10106.757: 91.2192% ( 30) 00:10:03.262 10106.757 - 10159.396: 91.4393% ( 30) 00:10:03.262 10159.396 - 10212.035: 91.5566% ( 16) 00:10:03.262 10212.035 - 10264.675: 91.6593% ( 14) 00:10:03.262 10264.675 - 10317.314: 91.7474% ( 12) 00:10:03.262 10317.314 - 10369.953: 91.8060% ( 8) 00:10:03.262 10369.953 - 10422.593: 91.8867% ( 11) 00:10:03.262 10422.593 - 10475.232: 91.9748% ( 12) 00:10:03.262 10475.232 - 10527.871: 92.0628% ( 12) 00:10:03.262 
10527.871 - 10580.511: 92.2022% ( 19) 00:10:03.262 10580.511 - 10633.150: 92.3415% ( 19) 00:10:03.262 10633.150 - 10685.790: 92.4736% ( 18) 00:10:03.262 10685.790 - 10738.429: 92.6937% ( 30) 00:10:03.262 10738.429 - 10791.068: 92.9357% ( 33) 00:10:03.262 10791.068 - 10843.708: 93.1411% ( 28) 00:10:03.262 10843.708 - 10896.347: 93.3759% ( 32) 00:10:03.262 10896.347 - 10948.986: 93.6106% ( 32) 00:10:03.262 10948.986 - 11001.626: 93.8380% ( 31) 00:10:03.262 11001.626 - 11054.265: 94.0948% ( 35) 00:10:03.262 11054.265 - 11106.904: 94.3735% ( 38) 00:10:03.262 11106.904 - 11159.544: 94.6303% ( 35) 00:10:03.262 11159.544 - 11212.183: 94.9090% ( 38) 00:10:03.262 11212.183 - 11264.822: 95.1585% ( 34) 00:10:03.263 11264.822 - 11317.462: 95.3932% ( 32) 00:10:03.263 11317.462 - 11370.101: 95.6279% ( 32) 00:10:03.263 11370.101 - 11422.741: 95.8553% ( 31) 00:10:03.263 11422.741 - 11475.380: 96.0534% ( 27) 00:10:03.263 11475.380 - 11528.019: 96.2588% ( 28) 00:10:03.263 11528.019 - 11580.659: 96.4129% ( 21) 00:10:03.263 11580.659 - 11633.298: 96.5449% ( 18) 00:10:03.263 11633.298 - 11685.937: 96.6696% ( 17) 00:10:03.263 11685.937 - 11738.577: 96.7650% ( 13) 00:10:03.263 11738.577 - 11791.216: 96.8383% ( 10) 00:10:03.263 11791.216 - 11843.855: 96.9190% ( 11) 00:10:03.263 11843.855 - 11896.495: 96.9704% ( 7) 00:10:03.263 11896.495 - 11949.134: 97.0217% ( 7) 00:10:03.263 11949.134 - 12001.773: 97.0364% ( 2) 00:10:03.263 12001.773 - 12054.413: 97.0584% ( 3) 00:10:03.263 12054.413 - 12107.052: 97.0731% ( 2) 00:10:03.263 12107.052 - 12159.692: 97.0951% ( 3) 00:10:03.263 12159.692 - 12212.331: 97.1171% ( 3) 00:10:03.263 12212.331 - 12264.970: 97.1317% ( 2) 00:10:03.263 12264.970 - 12317.610: 97.1464% ( 2) 00:10:03.263 12317.610 - 12370.249: 97.1611% ( 2) 00:10:03.263 12370.249 - 12422.888: 97.1831% ( 3) 00:10:03.263 12422.888 - 12475.528: 97.1904% ( 1) 00:10:03.263 12475.528 - 12528.167: 97.2051% ( 2) 00:10:03.263 12528.167 - 12580.806: 97.2198% ( 2) 00:10:03.263 12580.806 - 12633.446: 97.2418% ( 3) 00:10:03.263 12633.446 - 12686.085: 97.2565% ( 2) 00:10:03.263 12686.085 - 12738.724: 97.2711% ( 2) 00:10:03.263 12738.724 - 12791.364: 97.2858% ( 2) 00:10:03.263 12791.364 - 12844.003: 97.3005% ( 2) 00:10:03.263 12844.003 - 12896.643: 97.3151% ( 2) 00:10:03.263 12896.643 - 12949.282: 97.3298% ( 2) 00:10:03.263 12949.282 - 13001.921: 97.3445% ( 2) 00:10:03.263 13001.921 - 13054.561: 97.3592% ( 2) 00:10:03.263 13054.561 - 13107.200: 97.3812% ( 3) 00:10:03.263 13107.200 - 13159.839: 97.3958% ( 2) 00:10:03.263 13159.839 - 13212.479: 97.4105% ( 2) 00:10:03.263 13212.479 - 13265.118: 97.4252% ( 2) 00:10:03.263 13265.118 - 13317.757: 97.4398% ( 2) 00:10:03.263 13317.757 - 13370.397: 97.4545% ( 2) 00:10:03.263 13370.397 - 13423.036: 97.4765% ( 3) 00:10:03.263 13423.036 - 13475.676: 97.4912% ( 2) 00:10:03.263 13475.676 - 13580.954: 97.5132% ( 3) 00:10:03.263 13580.954 - 13686.233: 97.5499% ( 5) 00:10:03.263 13686.233 - 13791.512: 97.5719% ( 3) 00:10:03.263 13791.512 - 13896.790: 97.6012% ( 4) 00:10:03.263 13896.790 - 14002.069: 97.6306% ( 4) 00:10:03.263 14002.069 - 14107.348: 97.6526% ( 3) 00:10:03.263 15265.414 - 15370.692: 97.6893% ( 5) 00:10:03.263 15370.692 - 15475.971: 97.7186% ( 4) 00:10:03.263 15475.971 - 15581.250: 97.7626% ( 6) 00:10:03.263 15581.250 - 15686.529: 97.8506% ( 12) 00:10:03.263 15686.529 - 15791.807: 97.9167% ( 9) 00:10:03.263 15791.807 - 15897.086: 97.9900% ( 10) 00:10:03.263 15897.086 - 16002.365: 98.0634% ( 10) 00:10:03.263 16002.365 - 16107.643: 98.1514% ( 12) 00:10:03.263 16107.643 - 16212.922: 
98.2321% ( 11) 00:10:03.263 16212.922 - 16318.201: 98.3128% ( 11) 00:10:03.263 16318.201 - 16423.480: 98.3862% ( 10) 00:10:03.263 16423.480 - 16528.758: 98.4668% ( 11) 00:10:03.263 16528.758 - 16634.037: 98.5475% ( 11) 00:10:03.263 16634.037 - 16739.316: 98.5915% ( 6) 00:10:03.263 18213.218 - 18318.496: 98.6136% ( 3) 00:10:03.263 18318.496 - 18423.775: 98.6576% ( 6) 00:10:03.263 18423.775 - 18529.054: 98.6942% ( 5) 00:10:03.263 18529.054 - 18634.333: 98.7309% ( 5) 00:10:03.263 18634.333 - 18739.611: 98.7603% ( 4) 00:10:03.263 18739.611 - 18844.890: 98.7969% ( 5) 00:10:03.263 18844.890 - 18950.169: 98.8410% ( 6) 00:10:03.263 18950.169 - 19055.447: 98.8776% ( 5) 00:10:03.263 19055.447 - 19160.726: 98.9143% ( 5) 00:10:03.263 19160.726 - 19266.005: 98.9510% ( 5) 00:10:03.263 19266.005 - 19371.284: 98.9877% ( 5) 00:10:03.263 19371.284 - 19476.562: 99.0317% ( 6) 00:10:03.263 19476.562 - 19581.841: 99.0610% ( 4) 00:10:03.263 24319.383 - 24424.662: 99.0684% ( 1) 00:10:03.263 24424.662 - 24529.941: 99.0977% ( 4) 00:10:03.263 24529.941 - 24635.219: 99.1197% ( 3) 00:10:03.263 24635.219 - 24740.498: 99.1417% ( 3) 00:10:03.263 24740.498 - 24845.777: 99.1711% ( 4) 00:10:03.263 24845.777 - 24951.055: 99.2004% ( 4) 00:10:03.263 24951.055 - 25056.334: 99.2224% ( 3) 00:10:03.263 25056.334 - 25161.613: 99.2518% ( 4) 00:10:03.263 25161.613 - 25266.892: 99.2738% ( 3) 00:10:03.263 25266.892 - 25372.170: 99.3031% ( 4) 00:10:03.263 25372.170 - 25477.449: 99.3251% ( 3) 00:10:03.263 25477.449 - 25582.728: 99.3471% ( 3) 00:10:03.263 25582.728 - 25688.006: 99.3691% ( 3) 00:10:03.263 25688.006 - 25793.285: 99.3985% ( 4) 00:10:03.263 25793.285 - 25898.564: 99.4205% ( 3) 00:10:03.263 25898.564 - 26003.843: 99.4498% ( 4) 00:10:03.263 26003.843 - 26109.121: 99.4718% ( 3) 00:10:03.263 26109.121 - 26214.400: 99.5012% ( 4) 00:10:03.263 26214.400 - 26319.679: 99.5305% ( 4) 00:10:03.263 31373.057 - 31583.614: 99.5819% ( 7) 00:10:03.263 31583.614 - 31794.172: 99.6332% ( 7) 00:10:03.263 31794.172 - 32004.729: 99.6846% ( 7) 00:10:03.263 32004.729 - 32215.287: 99.7359% ( 7) 00:10:03.263 32215.287 - 32425.844: 99.7726% ( 5) 00:10:03.263 32425.844 - 32636.402: 99.8166% ( 6) 00:10:03.263 32636.402 - 32846.959: 99.8680% ( 7) 00:10:03.263 32846.959 - 33057.516: 99.9266% ( 8) 00:10:03.263 33057.516 - 33268.074: 99.9780% ( 7) 00:10:03.263 33268.074 - 33478.631: 100.0000% ( 3) 00:10:03.263 00:10:03.263 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:03.263 ============================================================================== 00:10:03.263 Range in us Cumulative IO count 00:10:03.263 8001.182 - 8053.822: 0.0073% ( 1) 00:10:03.263 8053.822 - 8106.461: 0.1241% ( 16) 00:10:03.263 8106.461 - 8159.100: 0.3870% ( 36) 00:10:03.263 8159.100 - 8211.740: 0.9419% ( 76) 00:10:03.263 8211.740 - 8264.379: 1.9495% ( 138) 00:10:03.263 8264.379 - 8317.018: 3.7018% ( 240) 00:10:03.263 8317.018 - 8369.658: 5.8484% ( 294) 00:10:03.263 8369.658 - 8422.297: 8.6741% ( 387) 00:10:03.263 8422.297 - 8474.937: 11.9670% ( 451) 00:10:03.263 8474.937 - 8527.576: 15.6031% ( 498) 00:10:03.263 8527.576 - 8580.215: 19.7357% ( 566) 00:10:03.263 8580.215 - 8632.855: 24.2626% ( 620) 00:10:03.263 8632.855 - 8685.494: 28.9355% ( 640) 00:10:03.263 8685.494 - 8738.133: 33.7982% ( 666) 00:10:03.264 8738.133 - 8790.773: 38.7923% ( 684) 00:10:03.264 8790.773 - 8843.412: 43.9690% ( 709) 00:10:03.264 8843.412 - 8896.051: 49.0727% ( 699) 00:10:03.264 8896.051 - 8948.691: 54.1107% ( 690) 00:10:03.264 8948.691 - 9001.330: 59.0829% ( 681) 00:10:03.264 9001.330 
- 9053.969: 63.7339% ( 637) 00:10:03.264 9053.969 - 9106.609: 68.0345% ( 589) 00:10:03.264 9106.609 - 9159.248: 71.7290% ( 506) 00:10:03.264 9159.248 - 9211.888: 75.0292% ( 452) 00:10:03.264 9211.888 - 9264.527: 77.8548% ( 387) 00:10:03.264 9264.527 - 9317.166: 80.1986% ( 321) 00:10:03.264 9317.166 - 9369.806: 82.1700% ( 270) 00:10:03.264 9369.806 - 9422.445: 83.7763% ( 220) 00:10:03.264 9422.445 - 9475.084: 85.0102% ( 169) 00:10:03.264 9475.084 - 9527.724: 86.0032% ( 136) 00:10:03.264 9527.724 - 9580.363: 86.8721% ( 119) 00:10:03.264 9580.363 - 9633.002: 87.5438% ( 92) 00:10:03.264 9633.002 - 9685.642: 88.0695% ( 72) 00:10:03.264 9685.642 - 9738.281: 88.5295% ( 63) 00:10:03.264 9738.281 - 9790.920: 88.9603% ( 59) 00:10:03.264 9790.920 - 9843.560: 89.3400% ( 52) 00:10:03.264 9843.560 - 9896.199: 89.6758% ( 46) 00:10:03.264 9896.199 - 9948.839: 90.0044% ( 45) 00:10:03.264 9948.839 - 10001.478: 90.3183% ( 43) 00:10:03.264 10001.478 - 10054.117: 90.6396% ( 44) 00:10:03.264 10054.117 - 10106.757: 90.8879% ( 34) 00:10:03.264 10106.757 - 10159.396: 91.0704% ( 25) 00:10:03.264 10159.396 - 10212.035: 91.2237% ( 21) 00:10:03.264 10212.035 - 10264.675: 91.3478% ( 17) 00:10:03.264 10264.675 - 10317.314: 91.4282% ( 11) 00:10:03.264 10317.314 - 10369.953: 91.5158% ( 12) 00:10:03.264 10369.953 - 10422.593: 91.6326% ( 16) 00:10:03.264 10422.593 - 10475.232: 91.7421% ( 15) 00:10:03.264 10475.232 - 10527.871: 91.8589% ( 16) 00:10:03.264 10527.871 - 10580.511: 91.9393% ( 11) 00:10:03.264 10580.511 - 10633.150: 92.0488% ( 15) 00:10:03.264 10633.150 - 10685.790: 92.2459% ( 27) 00:10:03.264 10685.790 - 10738.429: 92.5088% ( 36) 00:10:03.264 10738.429 - 10791.068: 92.7497% ( 33) 00:10:03.264 10791.068 - 10843.708: 93.0053% ( 35) 00:10:03.264 10843.708 - 10896.347: 93.2535% ( 34) 00:10:03.264 10896.347 - 10948.986: 93.5018% ( 34) 00:10:03.264 10948.986 - 11001.626: 93.7573% ( 35) 00:10:03.264 11001.626 - 11054.265: 94.0055% ( 34) 00:10:03.264 11054.265 - 11106.904: 94.2611% ( 35) 00:10:03.264 11106.904 - 11159.544: 94.4947% ( 32) 00:10:03.264 11159.544 - 11212.183: 94.7211% ( 31) 00:10:03.264 11212.183 - 11264.822: 94.9620% ( 33) 00:10:03.264 11264.822 - 11317.462: 95.1592% ( 27) 00:10:03.264 11317.462 - 11370.101: 95.3855% ( 31) 00:10:03.264 11370.101 - 11422.741: 95.6046% ( 30) 00:10:03.264 11422.741 - 11475.380: 95.8163% ( 29) 00:10:03.264 11475.380 - 11528.019: 95.9477% ( 18) 00:10:03.264 11528.019 - 11580.659: 96.0864% ( 19) 00:10:03.264 11580.659 - 11633.298: 96.1668% ( 11) 00:10:03.264 11633.298 - 11685.937: 96.2544% ( 12) 00:10:03.264 11685.937 - 11738.577: 96.3420% ( 12) 00:10:03.264 11738.577 - 11791.216: 96.4077% ( 9) 00:10:03.264 11791.216 - 11843.855: 96.4588% ( 7) 00:10:03.264 11843.855 - 11896.495: 96.5172% ( 8) 00:10:03.264 11896.495 - 11949.134: 96.5683% ( 7) 00:10:03.264 11949.134 - 12001.773: 96.6268% ( 8) 00:10:03.264 12001.773 - 12054.413: 96.6852% ( 8) 00:10:03.264 12054.413 - 12107.052: 96.7509% ( 9) 00:10:03.264 12107.052 - 12159.692: 96.8166% ( 9) 00:10:03.264 12159.692 - 12212.331: 96.8823% ( 9) 00:10:03.264 12212.331 - 12264.970: 96.9407% ( 8) 00:10:03.264 12264.970 - 12317.610: 96.9918% ( 7) 00:10:03.264 12317.610 - 12370.249: 97.0502% ( 8) 00:10:03.264 12370.249 - 12422.888: 97.1159% ( 9) 00:10:03.264 12422.888 - 12475.528: 97.1744% ( 8) 00:10:03.264 12475.528 - 12528.167: 97.2328% ( 8) 00:10:03.264 12528.167 - 12580.806: 97.2912% ( 8) 00:10:03.264 12580.806 - 12633.446: 97.3496% ( 8) 00:10:03.264 12633.446 - 12686.085: 97.3715% ( 3) 00:10:03.264 12686.085 - 12738.724: 97.3861% ( 2) 
00:10:03.264 12738.724 - 12791.364: 97.4007% ( 2) 00:10:03.264 12791.364 - 12844.003: 97.4153% ( 2) 00:10:03.264 12844.003 - 12896.643: 97.4299% ( 2) 00:10:03.264 12896.643 - 12949.282: 97.4445% ( 2) 00:10:03.264 12949.282 - 13001.921: 97.4591% ( 2) 00:10:03.264 13001.921 - 13054.561: 97.4810% ( 3) 00:10:03.264 13054.561 - 13107.200: 97.4956% ( 2) 00:10:03.264 13107.200 - 13159.839: 97.5102% ( 2) 00:10:03.264 13159.839 - 13212.479: 97.5248% ( 2) 00:10:03.264 13212.479 - 13265.118: 97.5394% ( 2) 00:10:03.264 13265.118 - 13317.757: 97.5540% ( 2) 00:10:03.264 13317.757 - 13370.397: 97.5759% ( 3) 00:10:03.264 13370.397 - 13423.036: 97.5905% ( 2) 00:10:03.264 13423.036 - 13475.676: 97.6051% ( 2) 00:10:03.264 13475.676 - 13580.954: 97.6343% ( 4) 00:10:03.264 13580.954 - 13686.233: 97.6562% ( 3) 00:10:03.264 13686.233 - 13791.512: 97.6636% ( 1) 00:10:03.264 14423.184 - 14528.463: 97.6855% ( 3) 00:10:03.264 14528.463 - 14633.741: 97.7220% ( 5) 00:10:03.264 14633.741 - 14739.020: 97.7512% ( 4) 00:10:03.264 14739.020 - 14844.299: 97.7877% ( 5) 00:10:03.264 14844.299 - 14949.578: 97.8242% ( 5) 00:10:03.264 14949.578 - 15054.856: 97.8607% ( 5) 00:10:03.264 15054.856 - 15160.135: 97.8972% ( 5) 00:10:03.264 15160.135 - 15265.414: 97.9337% ( 5) 00:10:03.264 15265.414 - 15370.692: 97.9702% ( 5) 00:10:03.264 15370.692 - 15475.971: 98.0067% ( 5) 00:10:03.264 15475.971 - 15581.250: 98.0432% ( 5) 00:10:03.264 15581.250 - 15686.529: 98.0797% ( 5) 00:10:03.264 15686.529 - 15791.807: 98.1089% ( 4) 00:10:03.264 15791.807 - 15897.086: 98.1308% ( 3) 00:10:03.264 16423.480 - 16528.758: 98.1454% ( 2) 00:10:03.264 16528.758 - 16634.037: 98.1820% ( 5) 00:10:03.264 16634.037 - 16739.316: 98.2404% ( 8) 00:10:03.264 16739.316 - 16844.594: 98.2769% ( 5) 00:10:03.264 16844.594 - 16949.873: 98.3207% ( 6) 00:10:03.264 16949.873 - 17055.152: 98.3791% ( 8) 00:10:03.264 17055.152 - 17160.431: 98.4521% ( 10) 00:10:03.264 17160.431 - 17265.709: 98.5178% ( 9) 00:10:03.264 17265.709 - 17370.988: 98.5908% ( 10) 00:10:03.264 17370.988 - 17476.267: 98.6565% ( 9) 00:10:03.264 17476.267 - 17581.545: 98.7296% ( 10) 00:10:03.264 17581.545 - 17686.824: 98.7588% ( 4) 00:10:03.264 17686.824 - 17792.103: 98.7807% ( 3) 00:10:03.264 17792.103 - 17897.382: 98.8099% ( 4) 00:10:03.264 17897.382 - 18002.660: 98.8610% ( 7) 00:10:03.264 18002.660 - 18107.939: 98.9121% ( 7) 00:10:03.264 18107.939 - 18213.218: 98.9778% ( 9) 00:10:03.264 18213.218 - 18318.496: 99.0435% ( 9) 00:10:03.264 18318.496 - 18423.775: 99.1165% ( 10) 00:10:03.264 18423.775 - 18529.054: 99.1822% ( 9) 00:10:03.264 18529.054 - 18634.333: 99.2407% ( 8) 00:10:03.264 18634.333 - 18739.611: 99.3064% ( 9) 00:10:03.264 18739.611 - 18844.890: 99.3721% ( 9) 00:10:03.264 18844.890 - 18950.169: 99.4232% ( 7) 00:10:03.264 18950.169 - 19055.447: 99.4524% ( 4) 00:10:03.264 19055.447 - 19160.726: 99.4889% ( 5) 00:10:03.264 19160.726 - 19266.005: 99.5254% ( 5) 00:10:03.264 19266.005 - 19371.284: 99.5327% ( 1) 00:10:03.264 23687.711 - 23792.990: 99.5473% ( 2) 00:10:03.264 23792.990 - 23898.268: 99.5692% ( 3) 00:10:03.264 23898.268 - 24003.547: 99.5984% ( 4) 00:10:03.264 24003.547 - 24108.826: 99.6276% ( 4) 00:10:03.264 24108.826 - 24214.104: 99.6568% ( 4) 00:10:03.264 24214.104 - 24319.383: 99.6787% ( 3) 00:10:03.264 24319.383 - 24424.662: 99.7006% ( 3) 00:10:03.264 24424.662 - 24529.941: 99.7298% ( 4) 00:10:03.264 24529.941 - 24635.219: 99.7518% ( 3) 00:10:03.264 24635.219 - 24740.498: 99.7810% ( 4) 00:10:03.264 24740.498 - 24845.777: 99.8029% ( 3) 00:10:03.264 24845.777 - 24951.055: 99.8321% ( 4) 
00:10:03.264 24951.055 - 25056.334: 99.8613% ( 4) 00:10:03.264 25056.334 - 25161.613: 99.8832% ( 3) 00:10:03.264 25161.613 - 25266.892: 99.9051% ( 3) 00:10:03.264 25266.892 - 25372.170: 99.9343% ( 4) 00:10:03.264 25372.170 - 25477.449: 99.9562% ( 3) 00:10:03.264 25477.449 - 25582.728: 99.9854% ( 4) 00:10:03.264 25582.728 - 25688.006: 100.0000% ( 2) 00:10:03.264 00:10:03.264 12:03:50 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:04.644 Initializing NVMe Controllers 00:10:04.644 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.644 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.644 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.644 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.644 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:04.644 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:04.644 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:04.644 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:04.644 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:04.644 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:04.645 Initialization complete. Launching workers. 00:10:04.645 ======================================================== 00:10:04.645 Latency(us) 00:10:04.645 Device Information : IOPS MiB/s Average min max 00:10:04.645 PCIE (0000:00:10.0) NSID 1 from core 0: 8988.21 105.33 14270.43 8038.37 46871.80 00:10:04.645 PCIE (0000:00:11.0) NSID 1 from core 0: 8988.21 105.33 14245.25 8306.53 44986.02 00:10:04.645 PCIE (0000:00:13.0) NSID 1 from core 0: 8988.21 105.33 14219.97 8480.97 44228.39 00:10:04.645 PCIE (0000:00:12.0) NSID 1 from core 0: 8988.21 105.33 14195.25 8314.90 42608.96 00:10:04.645 PCIE (0000:00:12.0) NSID 2 from core 0: 8988.21 105.33 14170.26 8248.82 40926.45 00:10:04.645 PCIE (0000:00:12.0) NSID 3 from core 0: 8988.21 105.33 14145.03 8121.17 39508.18 00:10:04.645 ======================================================== 00:10:04.645 Total : 53929.25 631.98 14207.70 8038.37 46871.80 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 9053.969us 00:10:04.645 10.00000% : 10054.117us 00:10:04.645 25.00000% : 11264.822us 00:10:04.645 50.00000% : 13896.790us 00:10:04.645 75.00000% : 16318.201us 00:10:04.645 90.00000% : 18529.054us 00:10:04.645 95.00000% : 19371.284us 00:10:04.645 98.00000% : 21266.300us 00:10:04.645 99.00000% : 36215.878us 00:10:04.645 99.50000% : 45059.290us 00:10:04.645 99.90000% : 46533.192us 00:10:04.645 99.99000% : 46954.307us 00:10:04.645 99.99900% : 46954.307us 00:10:04.645 99.99990% : 46954.307us 00:10:04.645 99.99999% : 46954.307us 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 9211.888us 00:10:04.645 10.00000% : 10106.757us 00:10:04.645 25.00000% : 11264.822us 00:10:04.645 50.00000% : 13791.512us 00:10:04.645 75.00000% : 16212.922us 00:10:04.645 90.00000% : 18529.054us 00:10:04.645 95.00000% : 19581.841us 00:10:04.645 98.00000% : 21792.694us 00:10:04.645 99.00000% : 35584.206us 00:10:04.645 99.50000% : 43374.831us 00:10:04.645 99.90000% : 44638.175us 00:10:04.645 99.99000% : 45059.290us 00:10:04.645 99.99900% : 45059.290us 00:10:04.645 
99.99990% : 45059.290us 00:10:04.645 99.99999% : 45059.290us 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 8896.051us 00:10:04.645 10.00000% : 10106.757us 00:10:04.645 25.00000% : 11212.183us 00:10:04.645 50.00000% : 13896.790us 00:10:04.645 75.00000% : 16212.922us 00:10:04.645 90.00000% : 18318.496us 00:10:04.645 95.00000% : 19476.562us 00:10:04.645 98.00000% : 21687.415us 00:10:04.645 99.00000% : 34531.418us 00:10:04.645 99.50000% : 42743.158us 00:10:04.645 99.90000% : 44006.503us 00:10:04.645 99.99000% : 44427.618us 00:10:04.645 99.99900% : 44427.618us 00:10:04.645 99.99990% : 44427.618us 00:10:04.645 99.99999% : 44427.618us 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 8790.773us 00:10:04.645 10.00000% : 10159.396us 00:10:04.645 25.00000% : 11264.822us 00:10:04.645 50.00000% : 13896.790us 00:10:04.645 75.00000% : 16107.643us 00:10:04.645 90.00000% : 18213.218us 00:10:04.645 95.00000% : 19476.562us 00:10:04.645 98.00000% : 21371.579us 00:10:04.645 99.00000% : 32846.959us 00:10:04.645 99.50000% : 41058.699us 00:10:04.645 99.90000% : 42322.043us 00:10:04.645 99.99000% : 42743.158us 00:10:04.645 99.99900% : 42743.158us 00:10:04.645 99.99990% : 42743.158us 00:10:04.645 99.99999% : 42743.158us 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 9106.609us 00:10:04.645 10.00000% : 10054.117us 00:10:04.645 25.00000% : 11264.822us 00:10:04.645 50.00000% : 13896.790us 00:10:04.645 75.00000% : 16107.643us 00:10:04.645 90.00000% : 18318.496us 00:10:04.645 95.00000% : 19476.562us 00:10:04.645 98.00000% : 21476.858us 00:10:04.645 99.00000% : 31373.057us 00:10:04.645 99.50000% : 39584.797us 00:10:04.645 99.90000% : 40637.584us 00:10:04.645 99.99000% : 41058.699us 00:10:04.645 99.99900% : 41058.699us 00:10:04.645 99.99990% : 41058.699us 00:10:04.645 99.99999% : 41058.699us 00:10:04.645 00:10:04.645 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:04.645 ================================================================================= 00:10:04.645 1.00000% : 9106.609us 00:10:04.645 10.00000% : 10106.757us 00:10:04.645 25.00000% : 11264.822us 00:10:04.645 50.00000% : 13896.790us 00:10:04.645 75.00000% : 16212.922us 00:10:04.645 90.00000% : 18318.496us 00:10:04.645 95.00000% : 19476.562us 00:10:04.645 98.00000% : 21266.300us 00:10:04.645 99.00000% : 29688.598us 00:10:04.645 99.50000% : 37900.337us 00:10:04.645 99.90000% : 39374.239us 00:10:04.645 99.99000% : 39584.797us 00:10:04.645 99.99900% : 39584.797us 00:10:04.645 99.99990% : 39584.797us 00:10:04.645 99.99999% : 39584.797us 00:10:04.645 00:10:04.645 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:04.645 ============================================================================== 00:10:04.645 Range in us Cumulative IO count 00:10:04.645 8001.182 - 8053.822: 0.0111% ( 1) 00:10:04.645 8211.740 - 8264.379: 0.0554% ( 4) 00:10:04.645 8264.379 - 8317.018: 0.1551% ( 9) 00:10:04.645 8317.018 - 8369.658: 0.2660% ( 10) 00:10:04.645 8369.658 - 8422.297: 0.3657% ( 9) 00:10:04.645 8422.297 - 8474.937: 0.4543% ( 8) 00:10:04.645 8474.937 - 
8527.576: 0.4765% ( 2) 00:10:04.645 8527.576 - 8580.215: 0.4876% ( 1) 00:10:04.645 8580.215 - 8632.855: 0.5319% ( 4) 00:10:04.645 8632.855 - 8685.494: 0.5541% ( 2) 00:10:04.645 8685.494 - 8738.133: 0.5652% ( 1) 00:10:04.645 8738.133 - 8790.773: 0.5984% ( 3) 00:10:04.645 8790.773 - 8843.412: 0.6095% ( 1) 00:10:04.645 8843.412 - 8896.051: 0.6538% ( 4) 00:10:04.645 8896.051 - 8948.691: 0.7868% ( 12) 00:10:04.645 8948.691 - 9001.330: 0.9198% ( 12) 00:10:04.645 9001.330 - 9053.969: 1.0527% ( 12) 00:10:04.645 9053.969 - 9106.609: 1.2744% ( 20) 00:10:04.645 9106.609 - 9159.248: 1.5957% ( 29) 00:10:04.645 9159.248 - 9211.888: 1.8949% ( 27) 00:10:04.645 9211.888 - 9264.527: 2.3382% ( 40) 00:10:04.645 9264.527 - 9317.166: 2.7150% ( 34) 00:10:04.645 9317.166 - 9369.806: 3.0253% ( 28) 00:10:04.645 9369.806 - 9422.445: 3.4020% ( 34) 00:10:04.645 9422.445 - 9475.084: 3.7012% ( 27) 00:10:04.645 9475.084 - 9527.724: 4.2664% ( 51) 00:10:04.645 9527.724 - 9580.363: 4.9978% ( 66) 00:10:04.645 9580.363 - 9633.002: 5.3635% ( 33) 00:10:04.645 9633.002 - 9685.642: 5.8400% ( 43) 00:10:04.645 9685.642 - 9738.281: 6.2500% ( 37) 00:10:04.645 9738.281 - 9790.920: 6.6711% ( 38) 00:10:04.645 9790.920 - 9843.560: 7.4579% ( 71) 00:10:04.645 9843.560 - 9896.199: 8.5106% ( 95) 00:10:04.645 9896.199 - 9948.839: 8.9539% ( 40) 00:10:04.645 9948.839 - 10001.478: 9.7850% ( 75) 00:10:04.645 10001.478 - 10054.117: 10.6383% ( 77) 00:10:04.645 10054.117 - 10106.757: 11.4805% ( 76) 00:10:04.645 10106.757 - 10159.396: 12.1676% ( 62) 00:10:04.645 10159.396 - 10212.035: 12.8657% ( 63) 00:10:04.645 10212.035 - 10264.675: 13.6082% ( 67) 00:10:04.645 10264.675 - 10317.314: 14.3728% ( 69) 00:10:04.645 10317.314 - 10369.953: 15.1485% ( 70) 00:10:04.645 10369.953 - 10422.593: 15.9020% ( 68) 00:10:04.645 10422.593 - 10475.232: 16.6999% ( 72) 00:10:04.645 10475.232 - 10527.871: 17.1653% ( 42) 00:10:04.645 10527.871 - 10580.511: 17.5975% ( 39) 00:10:04.645 10580.511 - 10633.150: 18.1516% ( 50) 00:10:04.645 10633.150 - 10685.790: 18.5727% ( 38) 00:10:04.645 10685.790 - 10738.429: 18.9605% ( 35) 00:10:04.645 10738.429 - 10791.068: 19.5811% ( 56) 00:10:04.645 10791.068 - 10843.708: 20.1130% ( 48) 00:10:04.645 10843.708 - 10896.347: 20.8666% ( 68) 00:10:04.645 10896.347 - 10948.986: 21.5315% ( 60) 00:10:04.645 10948.986 - 11001.626: 22.1410% ( 55) 00:10:04.646 11001.626 - 11054.265: 22.9167% ( 70) 00:10:04.646 11054.265 - 11106.904: 23.7035% ( 71) 00:10:04.646 11106.904 - 11159.544: 24.1689% ( 42) 00:10:04.646 11159.544 - 11212.183: 24.8227% ( 59) 00:10:04.646 11212.183 - 11264.822: 25.5208% ( 63) 00:10:04.646 11264.822 - 11317.462: 26.1192% ( 54) 00:10:04.646 11317.462 - 11370.101: 26.6290% ( 46) 00:10:04.646 11370.101 - 11422.741: 27.1831% ( 50) 00:10:04.646 11422.741 - 11475.380: 27.7815% ( 54) 00:10:04.646 11475.380 - 11528.019: 28.2358% ( 41) 00:10:04.646 11528.019 - 11580.659: 28.8010% ( 51) 00:10:04.646 11580.659 - 11633.298: 29.4105% ( 55) 00:10:04.646 11633.298 - 11685.937: 29.9645% ( 50) 00:10:04.646 11685.937 - 11738.577: 30.5740% ( 55) 00:10:04.646 11738.577 - 11791.216: 31.0616% ( 44) 00:10:04.646 11791.216 - 11843.855: 31.5381% ( 43) 00:10:04.646 11843.855 - 11896.495: 31.9925% ( 41) 00:10:04.646 11896.495 - 11949.134: 32.4025% ( 37) 00:10:04.646 11949.134 - 12001.773: 32.8125% ( 37) 00:10:04.646 12001.773 - 12054.413: 33.1560% ( 31) 00:10:04.646 12054.413 - 12107.052: 33.4774% ( 29) 00:10:04.646 12107.052 - 12159.692: 33.8209% ( 31) 00:10:04.646 12159.692 - 12212.331: 34.2420% ( 38) 00:10:04.646 12212.331 - 12264.970: 34.4858% ( 22) 
00:10:04.646 12264.970 - 12317.610: 34.7296% ( 22) 00:10:04.646 12317.610 - 12370.249: 35.0066% ( 25) 00:10:04.646 12370.249 - 12422.888: 35.2172% ( 19) 00:10:04.646 12422.888 - 12475.528: 35.5940% ( 34) 00:10:04.646 12475.528 - 12528.167: 35.8821% ( 26) 00:10:04.646 12528.167 - 12580.806: 36.4140% ( 48) 00:10:04.646 12580.806 - 12633.446: 36.8794% ( 42) 00:10:04.646 12633.446 - 12686.085: 37.4003% ( 47) 00:10:04.646 12686.085 - 12738.724: 37.7105% ( 28) 00:10:04.646 12738.724 - 12791.364: 38.1427% ( 39) 00:10:04.646 12791.364 - 12844.003: 38.5084% ( 33) 00:10:04.646 12844.003 - 12896.643: 38.8741% ( 33) 00:10:04.646 12896.643 - 12949.282: 39.2287% ( 32) 00:10:04.646 12949.282 - 13001.921: 39.6941% ( 42) 00:10:04.646 13001.921 - 13054.561: 40.1928% ( 45) 00:10:04.646 13054.561 - 13107.200: 40.6915% ( 45) 00:10:04.646 13107.200 - 13159.839: 41.2345% ( 49) 00:10:04.646 13159.839 - 13212.479: 42.2097% ( 88) 00:10:04.646 13212.479 - 13265.118: 42.8635% ( 59) 00:10:04.646 13265.118 - 13317.757: 43.4397% ( 52) 00:10:04.646 13317.757 - 13370.397: 44.2154% ( 70) 00:10:04.646 13370.397 - 13423.036: 44.7141% ( 45) 00:10:04.646 13423.036 - 13475.676: 45.2460% ( 48) 00:10:04.646 13475.676 - 13580.954: 46.5426% ( 117) 00:10:04.646 13580.954 - 13686.233: 47.9942% ( 131) 00:10:04.646 13686.233 - 13791.512: 49.1135% ( 101) 00:10:04.646 13791.512 - 13896.790: 50.1995% ( 98) 00:10:04.646 13896.790 - 14002.069: 51.3298% ( 102) 00:10:04.646 14002.069 - 14107.348: 52.6707% ( 121) 00:10:04.646 14107.348 - 14212.627: 54.0891% ( 128) 00:10:04.646 14212.627 - 14317.905: 55.5075% ( 128) 00:10:04.646 14317.905 - 14423.184: 56.7598% ( 113) 00:10:04.646 14423.184 - 14528.463: 58.1782% ( 128) 00:10:04.646 14528.463 - 14633.741: 59.5523% ( 124) 00:10:04.646 14633.741 - 14739.020: 60.6051% ( 95) 00:10:04.646 14739.020 - 14844.299: 61.5248% ( 83) 00:10:04.646 14844.299 - 14949.578: 62.6330% ( 100) 00:10:04.646 14949.578 - 15054.856: 63.8076% ( 106) 00:10:04.646 15054.856 - 15160.135: 64.7939% ( 89) 00:10:04.646 15160.135 - 15265.414: 65.9574% ( 105) 00:10:04.646 15265.414 - 15370.692: 67.0878% ( 102) 00:10:04.646 15370.692 - 15475.971: 68.3621% ( 115) 00:10:04.646 15475.971 - 15581.250: 69.6809% ( 119) 00:10:04.646 15581.250 - 15686.529: 70.8223% ( 103) 00:10:04.646 15686.529 - 15791.807: 71.6977% ( 79) 00:10:04.646 15791.807 - 15897.086: 72.5510% ( 77) 00:10:04.646 15897.086 - 16002.365: 73.2159% ( 60) 00:10:04.646 16002.365 - 16107.643: 74.0248% ( 73) 00:10:04.646 16107.643 - 16212.922: 74.9446% ( 83) 00:10:04.646 16212.922 - 16318.201: 76.0195% ( 97) 00:10:04.646 16318.201 - 16423.480: 76.9504% ( 84) 00:10:04.646 16423.480 - 16528.758: 77.7593% ( 73) 00:10:04.646 16528.758 - 16634.037: 78.4574% ( 63) 00:10:04.646 16634.037 - 16739.316: 79.3107% ( 77) 00:10:04.646 16739.316 - 16844.594: 80.3191% ( 91) 00:10:04.646 16844.594 - 16949.873: 80.9508% ( 57) 00:10:04.646 16949.873 - 17055.152: 81.5160% ( 51) 00:10:04.646 17055.152 - 17160.431: 82.1587% ( 58) 00:10:04.646 17160.431 - 17265.709: 82.8901% ( 66) 00:10:04.646 17265.709 - 17370.988: 83.4885% ( 54) 00:10:04.646 17370.988 - 17476.267: 84.0869% ( 54) 00:10:04.646 17476.267 - 17581.545: 84.7296% ( 58) 00:10:04.646 17581.545 - 17686.824: 85.3502% ( 56) 00:10:04.646 17686.824 - 17792.103: 86.1480% ( 72) 00:10:04.646 17792.103 - 17897.382: 86.7132% ( 51) 00:10:04.646 17897.382 - 18002.660: 87.2340% ( 47) 00:10:04.646 18002.660 - 18107.939: 87.7660% ( 48) 00:10:04.646 18107.939 - 18213.218: 88.5860% ( 74) 00:10:04.646 18213.218 - 18318.496: 89.2066% ( 56) 00:10:04.646 
18318.496 - 18423.775: 89.9379% ( 66) 00:10:04.646 18423.775 - 18529.054: 90.6250% ( 62) 00:10:04.646 18529.054 - 18634.333: 91.2677% ( 58) 00:10:04.646 18634.333 - 18739.611: 91.7775% ( 46) 00:10:04.646 18739.611 - 18844.890: 92.3759% ( 54) 00:10:04.646 18844.890 - 18950.169: 92.8856% ( 46) 00:10:04.646 18950.169 - 19055.447: 93.3843% ( 45) 00:10:04.646 19055.447 - 19160.726: 93.9716% ( 53) 00:10:04.646 19160.726 - 19266.005: 94.4703% ( 45) 00:10:04.646 19266.005 - 19371.284: 95.0465% ( 52) 00:10:04.646 19371.284 - 19476.562: 95.5563% ( 46) 00:10:04.646 19476.562 - 19581.841: 95.9885% ( 39) 00:10:04.646 19581.841 - 19687.120: 96.2877% ( 27) 00:10:04.646 19687.120 - 19792.398: 96.5093% ( 20) 00:10:04.646 19792.398 - 19897.677: 96.6090% ( 9) 00:10:04.646 19897.677 - 20002.956: 96.7309% ( 11) 00:10:04.646 20002.956 - 20108.235: 96.9193% ( 17) 00:10:04.646 20108.235 - 20213.513: 97.1077% ( 17) 00:10:04.646 20213.513 - 20318.792: 97.2407% ( 12) 00:10:04.646 20318.792 - 20424.071: 97.3515% ( 10) 00:10:04.646 20424.071 - 20529.349: 97.4734% ( 11) 00:10:04.646 20529.349 - 20634.628: 97.6064% ( 12) 00:10:04.646 20634.628 - 20739.907: 97.6950% ( 8) 00:10:04.646 20739.907 - 20845.186: 97.7504% ( 5) 00:10:04.646 20845.186 - 20950.464: 97.7726% ( 2) 00:10:04.646 20950.464 - 21055.743: 97.7948% ( 2) 00:10:04.646 21055.743 - 21161.022: 97.8834% ( 8) 00:10:04.646 21161.022 - 21266.300: 98.0607% ( 16) 00:10:04.646 21266.300 - 21371.579: 98.1051% ( 4) 00:10:04.646 21371.579 - 21476.858: 98.1494% ( 4) 00:10:04.646 21476.858 - 21582.137: 98.1605% ( 1) 00:10:04.646 21687.415 - 21792.694: 98.2048% ( 4) 00:10:04.646 21792.694 - 21897.973: 98.2159% ( 1) 00:10:04.646 21897.973 - 22003.251: 98.2713% ( 5) 00:10:04.646 22003.251 - 22108.530: 98.2934% ( 2) 00:10:04.646 22108.530 - 22213.809: 98.3488% ( 5) 00:10:04.646 22213.809 - 22319.088: 98.3710% ( 2) 00:10:04.646 22319.088 - 22424.366: 98.4153% ( 4) 00:10:04.646 22529.645 - 22634.924: 98.4375% ( 2) 00:10:04.646 22634.924 - 22740.202: 98.4597% ( 2) 00:10:04.646 22740.202 - 22845.481: 98.5040% ( 4) 00:10:04.646 22845.481 - 22950.760: 98.5151% ( 1) 00:10:04.646 22950.760 - 23056.039: 98.5594% ( 4) 00:10:04.646 23056.039 - 23161.317: 98.5816% ( 2) 00:10:04.646 33899.746 - 34110.304: 98.5926% ( 1) 00:10:04.646 34110.304 - 34320.861: 98.6259% ( 3) 00:10:04.646 34320.861 - 34531.418: 98.6702% ( 4) 00:10:04.646 34531.418 - 34741.976: 98.7145% ( 4) 00:10:04.646 34741.976 - 34952.533: 98.7589% ( 4) 00:10:04.646 34952.533 - 35163.091: 98.8143% ( 5) 00:10:04.646 35163.091 - 35373.648: 98.8586% ( 4) 00:10:04.646 35373.648 - 35584.206: 98.9029% ( 4) 00:10:04.646 35584.206 - 35794.763: 98.9473% ( 4) 00:10:04.646 35794.763 - 36005.320: 98.9916% ( 4) 00:10:04.646 36005.320 - 36215.878: 99.0470% ( 5) 00:10:04.646 36215.878 - 36426.435: 99.0802% ( 3) 00:10:04.646 36426.435 - 36636.993: 99.1246% ( 4) 00:10:04.646 36636.993 - 36847.550: 99.1800% ( 5) 00:10:04.646 36847.550 - 37058.108: 99.2243% ( 4) 00:10:04.646 37058.108 - 37268.665: 99.2686% ( 4) 00:10:04.646 37268.665 - 37479.222: 99.2908% ( 2) 00:10:04.646 44217.060 - 44427.618: 99.3351% ( 4) 00:10:04.646 44427.618 - 44638.175: 99.3794% ( 4) 00:10:04.646 44638.175 - 44848.733: 99.4459% ( 6) 00:10:04.646 44848.733 - 45059.290: 99.5013% ( 5) 00:10:04.646 45059.290 - 45269.847: 99.5567% ( 5) 00:10:04.646 45269.847 - 45480.405: 99.6121% ( 5) 00:10:04.647 45480.405 - 45690.962: 99.6786% ( 6) 00:10:04.647 45690.962 - 45901.520: 99.7230% ( 4) 00:10:04.647 45901.520 - 46112.077: 99.7784% ( 5) 00:10:04.647 46112.077 - 46322.635: 99.8338% 
( 5) 00:10:04.647 46322.635 - 46533.192: 99.9003% ( 6) 00:10:04.647 46533.192 - 46743.749: 99.9557% ( 5) 00:10:04.647 46743.749 - 46954.307: 100.0000% ( 4) 00:10:04.647 00:10:04.647 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:04.647 ============================================================================== 00:10:04.647 Range in us Cumulative IO count 00:10:04.647 8264.379 - 8317.018: 0.0111% ( 1) 00:10:04.647 8317.018 - 8369.658: 0.0554% ( 4) 00:10:04.647 8369.658 - 8422.297: 0.1219% ( 6) 00:10:04.647 8422.297 - 8474.937: 0.2105% ( 8) 00:10:04.647 8474.937 - 8527.576: 0.3546% ( 13) 00:10:04.647 8527.576 - 8580.215: 0.4654% ( 10) 00:10:04.647 8580.215 - 8632.855: 0.5319% ( 6) 00:10:04.647 8632.855 - 8685.494: 0.6095% ( 7) 00:10:04.647 8685.494 - 8738.133: 0.6760% ( 6) 00:10:04.647 8738.133 - 8790.773: 0.7092% ( 3) 00:10:04.647 8896.051 - 8948.691: 0.7203% ( 1) 00:10:04.647 8948.691 - 9001.330: 0.7314% ( 1) 00:10:04.647 9053.969 - 9106.609: 0.7868% ( 5) 00:10:04.647 9106.609 - 9159.248: 0.9863% ( 18) 00:10:04.647 9159.248 - 9211.888: 1.2522% ( 24) 00:10:04.647 9211.888 - 9264.527: 1.6068% ( 32) 00:10:04.647 9264.527 - 9317.166: 2.1387% ( 48) 00:10:04.647 9317.166 - 9369.806: 2.7815% ( 58) 00:10:04.647 9369.806 - 9422.445: 3.4907% ( 64) 00:10:04.647 9422.445 - 9475.084: 4.0115% ( 47) 00:10:04.647 9475.084 - 9527.724: 4.5324% ( 47) 00:10:04.647 9527.724 - 9580.363: 4.9645% ( 39) 00:10:04.647 9580.363 - 9633.002: 5.4743% ( 46) 00:10:04.647 9633.002 - 9685.642: 5.8289% ( 32) 00:10:04.647 9685.642 - 9738.281: 6.1059% ( 25) 00:10:04.647 9738.281 - 9790.920: 6.4384% ( 30) 00:10:04.647 9790.920 - 9843.560: 6.8595% ( 38) 00:10:04.647 9843.560 - 9896.199: 7.3360% ( 43) 00:10:04.647 9896.199 - 9948.839: 7.9233% ( 53) 00:10:04.647 9948.839 - 10001.478: 8.5439% ( 56) 00:10:04.647 10001.478 - 10054.117: 9.3972% ( 77) 00:10:04.647 10054.117 - 10106.757: 10.1618% ( 69) 00:10:04.647 10106.757 - 10159.396: 10.9153% ( 68) 00:10:04.647 10159.396 - 10212.035: 11.6135% ( 63) 00:10:04.647 10212.035 - 10264.675: 12.4335% ( 74) 00:10:04.647 10264.675 - 10317.314: 13.2425% ( 73) 00:10:04.647 10317.314 - 10369.953: 14.0071% ( 69) 00:10:04.647 10369.953 - 10422.593: 14.9047% ( 81) 00:10:04.647 10422.593 - 10475.232: 15.7026% ( 72) 00:10:04.647 10475.232 - 10527.871: 16.4672% ( 69) 00:10:04.647 10527.871 - 10580.511: 17.2651% ( 72) 00:10:04.647 10580.511 - 10633.150: 17.8967% ( 57) 00:10:04.647 10633.150 - 10685.790: 18.3732% ( 43) 00:10:04.647 10685.790 - 10738.429: 18.9162% ( 49) 00:10:04.647 10738.429 - 10791.068: 19.4038% ( 44) 00:10:04.647 10791.068 - 10843.708: 19.9468% ( 49) 00:10:04.647 10843.708 - 10896.347: 20.6006% ( 59) 00:10:04.647 10896.347 - 10948.986: 21.3542% ( 68) 00:10:04.647 10948.986 - 11001.626: 22.1520% ( 72) 00:10:04.647 11001.626 - 11054.265: 22.8391% ( 62) 00:10:04.647 11054.265 - 11106.904: 23.5816% ( 67) 00:10:04.647 11106.904 - 11159.544: 24.3019% ( 65) 00:10:04.647 11159.544 - 11212.183: 24.9335% ( 57) 00:10:04.647 11212.183 - 11264.822: 25.8090% ( 79) 00:10:04.647 11264.822 - 11317.462: 26.5403% ( 66) 00:10:04.647 11317.462 - 11370.101: 27.2717% ( 66) 00:10:04.647 11370.101 - 11422.741: 27.9809% ( 64) 00:10:04.647 11422.741 - 11475.380: 28.5904% ( 55) 00:10:04.647 11475.380 - 11528.019: 29.2775% ( 62) 00:10:04.647 11528.019 - 11580.659: 29.8094% ( 48) 00:10:04.647 11580.659 - 11633.298: 30.2305% ( 38) 00:10:04.647 11633.298 - 11685.937: 30.6294% ( 36) 00:10:04.647 11685.937 - 11738.577: 31.1835% ( 50) 00:10:04.647 11738.577 - 11791.216: 31.5049% ( 29) 
00:10:04.647 11791.216 - 11843.855: 31.8706% ( 33) 00:10:04.647 11843.855 - 11896.495: 32.1809% ( 28) 00:10:04.647 11896.495 - 11949.134: 32.5576% ( 34) 00:10:04.647 11949.134 - 12001.773: 32.8568% ( 27) 00:10:04.647 12001.773 - 12054.413: 33.1339% ( 25) 00:10:04.647 12054.413 - 12107.052: 33.4885% ( 32) 00:10:04.647 12107.052 - 12159.692: 33.8985% ( 37) 00:10:04.647 12159.692 - 12212.331: 34.4415% ( 49) 00:10:04.647 12212.331 - 12264.970: 34.9180% ( 43) 00:10:04.647 12264.970 - 12317.610: 35.2283% ( 28) 00:10:04.647 12317.610 - 12370.249: 35.4499% ( 20) 00:10:04.647 12370.249 - 12422.888: 35.6715% ( 20) 00:10:04.647 12422.888 - 12475.528: 36.0040% ( 30) 00:10:04.647 12475.528 - 12528.167: 36.3586% ( 32) 00:10:04.647 12528.167 - 12580.806: 36.6800% ( 29) 00:10:04.647 12580.806 - 12633.446: 37.0678% ( 35) 00:10:04.647 12633.446 - 12686.085: 37.4778% ( 37) 00:10:04.647 12686.085 - 12738.724: 37.9211% ( 40) 00:10:04.647 12738.724 - 12791.364: 38.3865% ( 42) 00:10:04.647 12791.364 - 12844.003: 38.9184% ( 48) 00:10:04.647 12844.003 - 12896.643: 39.2952% ( 34) 00:10:04.647 12896.643 - 12949.282: 39.6498% ( 32) 00:10:04.647 12949.282 - 13001.921: 40.0155% ( 33) 00:10:04.647 13001.921 - 13054.561: 40.5474% ( 48) 00:10:04.647 13054.561 - 13107.200: 40.8910% ( 31) 00:10:04.647 13107.200 - 13159.839: 41.5226% ( 57) 00:10:04.647 13159.839 - 13212.479: 42.1764% ( 59) 00:10:04.647 13212.479 - 13265.118: 43.0519% ( 79) 00:10:04.647 13265.118 - 13317.757: 44.0935% ( 94) 00:10:04.647 13317.757 - 13370.397: 45.1241% ( 93) 00:10:04.647 13370.397 - 13423.036: 45.8555% ( 66) 00:10:04.647 13423.036 - 13475.676: 46.4317% ( 52) 00:10:04.647 13475.676 - 13580.954: 47.6729% ( 112) 00:10:04.647 13580.954 - 13686.233: 49.0802% ( 127) 00:10:04.647 13686.233 - 13791.512: 50.2216% ( 103) 00:10:04.647 13791.512 - 13896.790: 51.5293% ( 118) 00:10:04.647 13896.790 - 14002.069: 52.8147% ( 116) 00:10:04.647 14002.069 - 14107.348: 54.0669% ( 113) 00:10:04.647 14107.348 - 14212.627: 55.3413% ( 115) 00:10:04.647 14212.627 - 14317.905: 56.5049% ( 105) 00:10:04.647 14317.905 - 14423.184: 57.4579% ( 86) 00:10:04.647 14423.184 - 14528.463: 58.5550% ( 99) 00:10:04.647 14528.463 - 14633.741: 59.5966% ( 94) 00:10:04.647 14633.741 - 14739.020: 60.8267% ( 111) 00:10:04.647 14739.020 - 14844.299: 62.0235% ( 108) 00:10:04.647 14844.299 - 14949.578: 63.3754% ( 122) 00:10:04.647 14949.578 - 15054.856: 64.7939% ( 128) 00:10:04.647 15054.856 - 15160.135: 66.1680% ( 124) 00:10:04.647 15160.135 - 15265.414: 67.5754% ( 127) 00:10:04.647 15265.414 - 15370.692: 68.7168% ( 103) 00:10:04.647 15370.692 - 15475.971: 69.6587% ( 85) 00:10:04.647 15475.971 - 15581.250: 70.6006% ( 85) 00:10:04.647 15581.250 - 15686.529: 71.2212% ( 56) 00:10:04.647 15686.529 - 15791.807: 71.8196% ( 54) 00:10:04.647 15791.807 - 15897.086: 72.4845% ( 60) 00:10:04.647 15897.086 - 16002.365: 73.2491% ( 69) 00:10:04.647 16002.365 - 16107.643: 74.2575% ( 91) 00:10:04.647 16107.643 - 16212.922: 75.2549% ( 90) 00:10:04.647 16212.922 - 16318.201: 76.2522% ( 90) 00:10:04.647 16318.201 - 16423.480: 77.3382% ( 98) 00:10:04.647 16423.480 - 16528.758: 78.0807% ( 67) 00:10:04.647 16528.758 - 16634.037: 78.7234% ( 58) 00:10:04.647 16634.037 - 16739.316: 79.4437% ( 65) 00:10:04.647 16739.316 - 16844.594: 79.9867% ( 49) 00:10:04.647 16844.594 - 16949.873: 80.7070% ( 65) 00:10:04.647 16949.873 - 17055.152: 81.5935% ( 80) 00:10:04.647 17055.152 - 17160.431: 82.4468% ( 77) 00:10:04.647 17160.431 - 17265.709: 83.1228% ( 61) 00:10:04.647 17265.709 - 17370.988: 83.8652% ( 67) 00:10:04.647 
17370.988 - 17476.267: 84.6742% ( 73) 00:10:04.647 17476.267 - 17581.545: 85.3613% ( 62) 00:10:04.647 17581.545 - 17686.824: 86.1591% ( 72) 00:10:04.647 17686.824 - 17792.103: 86.7021% ( 49) 00:10:04.647 17792.103 - 17897.382: 87.2340% ( 48) 00:10:04.647 17897.382 - 18002.660: 87.6108% ( 34) 00:10:04.647 18002.660 - 18107.939: 87.9765% ( 33) 00:10:04.647 18107.939 - 18213.218: 88.5084% ( 48) 00:10:04.647 18213.218 - 18318.496: 89.0736% ( 51) 00:10:04.647 18318.496 - 18423.775: 89.5279% ( 41) 00:10:04.647 18423.775 - 18529.054: 90.0820% ( 50) 00:10:04.647 18529.054 - 18634.333: 90.6915% ( 55) 00:10:04.647 18634.333 - 18739.611: 91.2012% ( 46) 00:10:04.647 18739.611 - 18844.890: 91.5891% ( 35) 00:10:04.647 18844.890 - 18950.169: 92.0656% ( 43) 00:10:04.648 18950.169 - 19055.447: 92.5643% ( 45) 00:10:04.648 19055.447 - 19160.726: 93.1073% ( 49) 00:10:04.648 19160.726 - 19266.005: 93.5505% ( 40) 00:10:04.648 19266.005 - 19371.284: 94.1157% ( 51) 00:10:04.648 19371.284 - 19476.562: 94.6254% ( 46) 00:10:04.648 19476.562 - 19581.841: 95.2017% ( 52) 00:10:04.648 19581.841 - 19687.120: 95.6671% ( 42) 00:10:04.648 19687.120 - 19792.398: 95.9663% ( 27) 00:10:04.648 19792.398 - 19897.677: 96.1436% ( 16) 00:10:04.648 19897.677 - 20002.956: 96.3542% ( 19) 00:10:04.648 20002.956 - 20108.235: 96.5426% ( 17) 00:10:04.648 20108.235 - 20213.513: 96.7531% ( 19) 00:10:04.648 20213.513 - 20318.792: 96.9526% ( 18) 00:10:04.648 20318.792 - 20424.071: 97.0855% ( 12) 00:10:04.648 20424.071 - 20529.349: 97.2296% ( 13) 00:10:04.648 20529.349 - 20634.628: 97.3737% ( 13) 00:10:04.648 20634.628 - 20739.907: 97.4734% ( 9) 00:10:04.648 20739.907 - 20845.186: 97.5842% ( 10) 00:10:04.648 20845.186 - 20950.464: 97.6840% ( 9) 00:10:04.648 20950.464 - 21055.743: 97.7615% ( 7) 00:10:04.648 21055.743 - 21161.022: 97.8169% ( 5) 00:10:04.648 21161.022 - 21266.300: 97.8613% ( 4) 00:10:04.648 21266.300 - 21371.579: 97.8723% ( 1) 00:10:04.648 21371.579 - 21476.858: 97.8834% ( 1) 00:10:04.648 21476.858 - 21582.137: 97.9167% ( 3) 00:10:04.648 21582.137 - 21687.415: 97.9610% ( 4) 00:10:04.648 21687.415 - 21792.694: 98.0053% ( 4) 00:10:04.648 21792.694 - 21897.973: 98.0386% ( 3) 00:10:04.648 21897.973 - 22003.251: 98.0829% ( 4) 00:10:04.648 22003.251 - 22108.530: 98.1161% ( 3) 00:10:04.648 22108.530 - 22213.809: 98.1605% ( 4) 00:10:04.648 22213.809 - 22319.088: 98.1937% ( 3) 00:10:04.648 22319.088 - 22424.366: 98.2270% ( 3) 00:10:04.648 22424.366 - 22529.645: 98.2602% ( 3) 00:10:04.648 22529.645 - 22634.924: 98.2934% ( 3) 00:10:04.648 22634.924 - 22740.202: 98.3378% ( 4) 00:10:04.648 22740.202 - 22845.481: 98.3710% ( 3) 00:10:04.648 22845.481 - 22950.760: 98.4043% ( 3) 00:10:04.648 22950.760 - 23056.039: 98.4375% ( 3) 00:10:04.648 23056.039 - 23161.317: 98.4707% ( 3) 00:10:04.648 23161.317 - 23266.596: 98.5040% ( 3) 00:10:04.648 23266.596 - 23371.875: 98.5372% ( 3) 00:10:04.648 23371.875 - 23477.153: 98.5816% ( 4) 00:10:04.648 34110.304 - 34320.861: 98.6148% ( 3) 00:10:04.648 34320.861 - 34531.418: 98.6813% ( 6) 00:10:04.648 34531.418 - 34741.976: 98.7478% ( 6) 00:10:04.648 34741.976 - 34952.533: 98.8143% ( 6) 00:10:04.648 34952.533 - 35163.091: 98.8918% ( 7) 00:10:04.648 35163.091 - 35373.648: 98.9583% ( 6) 00:10:04.648 35373.648 - 35584.206: 99.0359% ( 7) 00:10:04.648 35584.206 - 35794.763: 99.1024% ( 6) 00:10:04.648 35794.763 - 36005.320: 99.1800% ( 7) 00:10:04.648 36005.320 - 36215.878: 99.2575% ( 7) 00:10:04.648 36215.878 - 36426.435: 99.2908% ( 3) 00:10:04.648 42532.601 - 42743.158: 99.3129% ( 2) 00:10:04.648 42743.158 - 42953.716: 
99.3905% ( 7) 00:10:04.648 42953.716 - 43164.273: 99.4459% ( 5) 00:10:04.648 43164.273 - 43374.831: 99.5013% ( 5) 00:10:04.648 43374.831 - 43585.388: 99.5678% ( 6) 00:10:04.648 43585.388 - 43795.945: 99.6343% ( 6) 00:10:04.648 43795.945 - 44006.503: 99.6897% ( 5) 00:10:04.648 44006.503 - 44217.060: 99.7562% ( 6) 00:10:04.648 44217.060 - 44427.618: 99.8227% ( 6) 00:10:04.648 44427.618 - 44638.175: 99.9003% ( 7) 00:10:04.648 44638.175 - 44848.733: 99.9557% ( 5) 00:10:04.648 44848.733 - 45059.290: 100.0000% ( 4) 00:10:04.648 00:10:04.648 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:04.648 ============================================================================== 00:10:04.648 Range in us Cumulative IO count 00:10:04.648 8474.937 - 8527.576: 0.0443% ( 4) 00:10:04.648 8527.576 - 8580.215: 0.0997% ( 5) 00:10:04.648 8580.215 - 8632.855: 0.1884% ( 8) 00:10:04.648 8632.855 - 8685.494: 0.3435% ( 14) 00:10:04.648 8685.494 - 8738.133: 0.5762% ( 21) 00:10:04.648 8738.133 - 8790.773: 0.7203% ( 13) 00:10:04.648 8790.773 - 8843.412: 0.9198% ( 18) 00:10:04.648 8843.412 - 8896.051: 1.1968% ( 25) 00:10:04.648 8896.051 - 8948.691: 1.3076% ( 10) 00:10:04.648 8948.691 - 9001.330: 1.3520% ( 4) 00:10:04.648 9001.330 - 9053.969: 1.3963% ( 4) 00:10:04.648 9053.969 - 9106.609: 1.4738% ( 7) 00:10:04.648 9106.609 - 9159.248: 1.6955% ( 20) 00:10:04.648 9159.248 - 9211.888: 2.0612% ( 33) 00:10:04.648 9211.888 - 9264.527: 2.4601% ( 36) 00:10:04.648 9264.527 - 9317.166: 3.3688% ( 82) 00:10:04.648 9317.166 - 9369.806: 3.7456% ( 34) 00:10:04.648 9369.806 - 9422.445: 4.0669% ( 29) 00:10:04.648 9422.445 - 9475.084: 4.3661% ( 27) 00:10:04.648 9475.084 - 9527.724: 4.5878% ( 20) 00:10:04.648 9527.724 - 9580.363: 4.7318% ( 13) 00:10:04.648 9580.363 - 9633.002: 4.8870% ( 14) 00:10:04.648 9633.002 - 9685.642: 5.1640% ( 25) 00:10:04.648 9685.642 - 9738.281: 5.4300% ( 24) 00:10:04.648 9738.281 - 9790.920: 5.8067% ( 34) 00:10:04.648 9790.920 - 9843.560: 6.3608% ( 50) 00:10:04.648 9843.560 - 9896.199: 6.9814% ( 56) 00:10:04.648 9896.199 - 9948.839: 8.0563% ( 97) 00:10:04.648 9948.839 - 10001.478: 8.8098% ( 68) 00:10:04.648 10001.478 - 10054.117: 9.5634% ( 68) 00:10:04.648 10054.117 - 10106.757: 10.5386% ( 88) 00:10:04.648 10106.757 - 10159.396: 11.1370% ( 54) 00:10:04.648 10159.396 - 10212.035: 11.8129% ( 61) 00:10:04.648 10212.035 - 10264.675: 12.6662% ( 77) 00:10:04.648 10264.675 - 10317.314: 13.4198% ( 68) 00:10:04.648 10317.314 - 10369.953: 14.0847% ( 60) 00:10:04.648 10369.953 - 10422.593: 14.7939% ( 64) 00:10:04.648 10422.593 - 10475.232: 15.5363% ( 67) 00:10:04.648 10475.232 - 10527.871: 16.6002% ( 96) 00:10:04.648 10527.871 - 10580.511: 17.2429% ( 58) 00:10:04.648 10580.511 - 10633.150: 18.1848% ( 85) 00:10:04.648 10633.150 - 10685.790: 18.9051% ( 65) 00:10:04.648 10685.790 - 10738.429: 19.6587% ( 68) 00:10:04.648 10738.429 - 10791.068: 20.2571% ( 54) 00:10:04.648 10791.068 - 10843.708: 20.7004% ( 40) 00:10:04.648 10843.708 - 10896.347: 21.2212% ( 47) 00:10:04.648 10896.347 - 10948.986: 21.7642% ( 49) 00:10:04.648 10948.986 - 11001.626: 22.2629% ( 45) 00:10:04.648 11001.626 - 11054.265: 22.8723% ( 55) 00:10:04.648 11054.265 - 11106.904: 23.7810% ( 82) 00:10:04.648 11106.904 - 11159.544: 24.5124% ( 66) 00:10:04.648 11159.544 - 11212.183: 25.0776% ( 51) 00:10:04.648 11212.183 - 11264.822: 25.4543% ( 34) 00:10:04.648 11264.822 - 11317.462: 25.7757% ( 29) 00:10:04.648 11317.462 - 11370.101: 26.0638% ( 26) 00:10:04.648 11370.101 - 11422.741: 26.3520% ( 26) 00:10:04.648 11422.741 - 11475.380: 26.6401% ( 26) 
00:10:04.648 11475.380 - 11528.019: 26.9614% ( 29) 00:10:04.648 11528.019 - 11580.659: 27.5709% ( 55) 00:10:04.648 11580.659 - 11633.298: 28.1250% ( 50) 00:10:04.648 11633.298 - 11685.937: 28.7566% ( 57) 00:10:04.648 11685.937 - 11738.577: 29.2553% ( 45) 00:10:04.648 11738.577 - 11791.216: 29.8426% ( 53) 00:10:04.648 11791.216 - 11843.855: 30.1973% ( 32) 00:10:04.648 11843.855 - 11896.495: 30.5297% ( 30) 00:10:04.648 11896.495 - 11949.134: 30.8954% ( 33) 00:10:04.648 11949.134 - 12001.773: 31.5049% ( 55) 00:10:04.648 12001.773 - 12054.413: 32.0035% ( 45) 00:10:04.648 12054.413 - 12107.052: 32.5465% ( 49) 00:10:04.648 12107.052 - 12159.692: 33.0895% ( 49) 00:10:04.648 12159.692 - 12212.331: 33.5993% ( 46) 00:10:04.648 12212.331 - 12264.970: 34.1534% ( 50) 00:10:04.648 12264.970 - 12317.610: 34.7296% ( 52) 00:10:04.648 12317.610 - 12370.249: 35.4056% ( 61) 00:10:04.648 12370.249 - 12422.888: 35.8599% ( 41) 00:10:04.648 12422.888 - 12475.528: 36.3143% ( 41) 00:10:04.648 12475.528 - 12528.167: 36.7686% ( 41) 00:10:04.648 12528.167 - 12580.806: 37.1343% ( 33) 00:10:04.648 12580.806 - 12633.446: 37.5222% ( 35) 00:10:04.648 12633.446 - 12686.085: 37.9100% ( 35) 00:10:04.648 12686.085 - 12738.724: 38.3311% ( 38) 00:10:04.648 12738.724 - 12791.364: 38.7855% ( 41) 00:10:04.648 12791.364 - 12844.003: 39.1512% ( 33) 00:10:04.648 12844.003 - 12896.643: 39.5501% ( 36) 00:10:04.648 12896.643 - 12949.282: 39.9269% ( 34) 00:10:04.648 12949.282 - 13001.921: 40.3147% ( 35) 00:10:04.648 13001.921 - 13054.561: 40.8799% ( 51) 00:10:04.648 13054.561 - 13107.200: 41.4894% ( 55) 00:10:04.648 13107.200 - 13159.839: 42.0102% ( 47) 00:10:04.648 13159.839 - 13212.479: 42.6640% ( 59) 00:10:04.648 13212.479 - 13265.118: 43.3067% ( 58) 00:10:04.649 13265.118 - 13317.757: 43.8830% ( 52) 00:10:04.649 13317.757 - 13370.397: 44.4038% ( 47) 00:10:04.649 13370.397 - 13423.036: 44.9246% ( 47) 00:10:04.649 13423.036 - 13475.676: 45.6560% ( 66) 00:10:04.649 13475.676 - 13580.954: 47.3958% ( 157) 00:10:04.649 13580.954 - 13686.233: 48.7699% ( 124) 00:10:04.649 13686.233 - 13791.512: 49.7673% ( 90) 00:10:04.649 13791.512 - 13896.790: 51.3187% ( 140) 00:10:04.649 13896.790 - 14002.069: 52.6374% ( 119) 00:10:04.649 14002.069 - 14107.348: 53.9229% ( 116) 00:10:04.649 14107.348 - 14212.627: 54.9978% ( 97) 00:10:04.649 14212.627 - 14317.905: 56.2057% ( 109) 00:10:04.649 14317.905 - 14423.184: 57.4801% ( 115) 00:10:04.649 14423.184 - 14528.463: 58.8098% ( 120) 00:10:04.649 14528.463 - 14633.741: 59.8848% ( 97) 00:10:04.649 14633.741 - 14739.020: 61.1037% ( 110) 00:10:04.649 14739.020 - 14844.299: 62.5443% ( 130) 00:10:04.649 14844.299 - 14949.578: 63.8741% ( 120) 00:10:04.649 14949.578 - 15054.856: 64.9490% ( 97) 00:10:04.649 15054.856 - 15160.135: 66.1902% ( 112) 00:10:04.649 15160.135 - 15265.414: 67.3759% ( 107) 00:10:04.649 15265.414 - 15370.692: 68.6281% ( 113) 00:10:04.649 15370.692 - 15475.971: 69.7252% ( 99) 00:10:04.649 15475.971 - 15581.250: 70.5452% ( 74) 00:10:04.649 15581.250 - 15686.529: 71.2323% ( 62) 00:10:04.649 15686.529 - 15791.807: 71.8639% ( 57) 00:10:04.649 15791.807 - 15897.086: 72.4402% ( 52) 00:10:04.649 15897.086 - 16002.365: 73.2824% ( 76) 00:10:04.649 16002.365 - 16107.643: 74.0581% ( 70) 00:10:04.649 16107.643 - 16212.922: 75.1219% ( 96) 00:10:04.649 16212.922 - 16318.201: 75.8754% ( 68) 00:10:04.649 16318.201 - 16423.480: 76.6955% ( 74) 00:10:04.649 16423.480 - 16528.758: 77.4379% ( 67) 00:10:04.649 16528.758 - 16634.037: 78.0696% ( 57) 00:10:04.649 16634.037 - 16739.316: 78.8453% ( 70) 00:10:04.649 
16739.316 - 16844.594: 79.7097% ( 78) 00:10:04.649 16844.594 - 16949.873: 80.6405% ( 84) 00:10:04.649 16949.873 - 17055.152: 82.0368% ( 126) 00:10:04.649 17055.152 - 17160.431: 82.9122% ( 79) 00:10:04.649 17160.431 - 17265.709: 83.8985% ( 89) 00:10:04.649 17265.709 - 17370.988: 84.6188% ( 65) 00:10:04.649 17370.988 - 17476.267: 85.2726% ( 59) 00:10:04.649 17476.267 - 17581.545: 85.9707% ( 63) 00:10:04.649 17581.545 - 17686.824: 86.6800% ( 64) 00:10:04.649 17686.824 - 17792.103: 87.4557% ( 70) 00:10:04.649 17792.103 - 17897.382: 87.9654% ( 46) 00:10:04.649 17897.382 - 18002.660: 88.5638% ( 54) 00:10:04.649 18002.660 - 18107.939: 89.2620% ( 63) 00:10:04.649 18107.939 - 18213.218: 89.7717% ( 46) 00:10:04.649 18213.218 - 18318.496: 90.2815% ( 46) 00:10:04.649 18318.496 - 18423.775: 90.9353% ( 59) 00:10:04.649 18423.775 - 18529.054: 91.4894% ( 50) 00:10:04.649 18529.054 - 18634.333: 91.9105% ( 38) 00:10:04.649 18634.333 - 18739.611: 92.3759% ( 42) 00:10:04.649 18739.611 - 18844.890: 92.8191% ( 40) 00:10:04.649 18844.890 - 18950.169: 93.2624% ( 40) 00:10:04.649 18950.169 - 19055.447: 93.6392% ( 34) 00:10:04.649 19055.447 - 19160.726: 94.0270% ( 35) 00:10:04.649 19160.726 - 19266.005: 94.4371% ( 37) 00:10:04.649 19266.005 - 19371.284: 94.7806% ( 31) 00:10:04.649 19371.284 - 19476.562: 95.1020% ( 29) 00:10:04.649 19476.562 - 19581.841: 95.3901% ( 26) 00:10:04.649 19581.841 - 19687.120: 95.6893% ( 27) 00:10:04.649 19687.120 - 19792.398: 95.8777% ( 17) 00:10:04.649 19792.398 - 19897.677: 96.0993% ( 20) 00:10:04.649 19897.677 - 20002.956: 96.3209% ( 20) 00:10:04.649 20002.956 - 20108.235: 96.5536% ( 21) 00:10:04.649 20108.235 - 20213.513: 96.8418% ( 26) 00:10:04.649 20213.513 - 20318.792: 97.0412% ( 18) 00:10:04.649 20318.792 - 20424.071: 97.1964% ( 14) 00:10:04.649 20424.071 - 20529.349: 97.3293% ( 12) 00:10:04.649 20529.349 - 20634.628: 97.4845% ( 14) 00:10:04.649 20634.628 - 20739.907: 97.5510% ( 6) 00:10:04.649 20739.907 - 20845.186: 97.6175% ( 6) 00:10:04.649 20845.186 - 20950.464: 97.7283% ( 10) 00:10:04.649 20950.464 - 21055.743: 97.7615% ( 3) 00:10:04.649 21055.743 - 21161.022: 97.7948% ( 3) 00:10:04.649 21161.022 - 21266.300: 97.8280% ( 3) 00:10:04.649 21266.300 - 21371.579: 97.8723% ( 4) 00:10:04.649 21371.579 - 21476.858: 97.9277% ( 5) 00:10:04.649 21476.858 - 21582.137: 97.9721% ( 4) 00:10:04.649 21582.137 - 21687.415: 98.0053% ( 3) 00:10:04.649 21687.415 - 21792.694: 98.0496% ( 4) 00:10:04.649 21792.694 - 21897.973: 98.0829% ( 3) 00:10:04.649 21897.973 - 22003.251: 98.1272% ( 4) 00:10:04.649 22003.251 - 22108.530: 98.1605% ( 3) 00:10:04.649 22108.530 - 22213.809: 98.2048% ( 4) 00:10:04.649 22213.809 - 22319.088: 98.2491% ( 4) 00:10:04.649 22319.088 - 22424.366: 98.2824% ( 3) 00:10:04.649 22424.366 - 22529.645: 98.3156% ( 3) 00:10:04.649 22529.645 - 22634.924: 98.3488% ( 3) 00:10:04.649 22634.924 - 22740.202: 98.3932% ( 4) 00:10:04.649 22740.202 - 22845.481: 98.4264% ( 3) 00:10:04.649 22845.481 - 22950.760: 98.4597% ( 3) 00:10:04.649 22950.760 - 23056.039: 98.5040% ( 4) 00:10:04.649 23056.039 - 23161.317: 98.5372% ( 3) 00:10:04.649 23161.317 - 23266.596: 98.5816% ( 4) 00:10:04.649 33057.516 - 33268.074: 98.5926% ( 1) 00:10:04.649 33268.074 - 33478.631: 98.6702% ( 7) 00:10:04.649 33478.631 - 33689.189: 98.7367% ( 6) 00:10:04.649 33689.189 - 33899.746: 98.8143% ( 7) 00:10:04.649 33899.746 - 34110.304: 98.8918% ( 7) 00:10:04.649 34110.304 - 34320.861: 98.9694% ( 7) 00:10:04.649 34320.861 - 34531.418: 99.0470% ( 7) 00:10:04.649 34531.418 - 34741.976: 99.1024% ( 5) 00:10:04.649 34741.976 - 
34952.533: 99.1800% ( 7) 00:10:04.649 34952.533 - 35163.091: 99.2575% ( 7) 00:10:04.649 35163.091 - 35373.648: 99.2908% ( 3) 00:10:04.649 41690.371 - 41900.929: 99.3019% ( 1) 00:10:04.649 41900.929 - 42111.486: 99.3684% ( 6) 00:10:04.649 42111.486 - 42322.043: 99.4238% ( 5) 00:10:04.649 42322.043 - 42532.601: 99.4681% ( 4) 00:10:04.649 42532.601 - 42743.158: 99.5346% ( 6) 00:10:04.649 42743.158 - 42953.716: 99.5900% ( 5) 00:10:04.649 42953.716 - 43164.273: 99.6565% ( 6) 00:10:04.649 43164.273 - 43374.831: 99.7340% ( 7) 00:10:04.649 43374.831 - 43585.388: 99.7895% ( 5) 00:10:04.649 43585.388 - 43795.945: 99.8670% ( 7) 00:10:04.649 43795.945 - 44006.503: 99.9335% ( 6) 00:10:04.649 44006.503 - 44217.060: 99.9889% ( 5) 00:10:04.649 44217.060 - 44427.618: 100.0000% ( 1) 00:10:04.649 00:10:04.649 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:04.649 ============================================================================== 00:10:04.649 Range in us Cumulative IO count 00:10:04.649 8264.379 - 8317.018: 0.0111% ( 1) 00:10:04.649 8369.658 - 8422.297: 0.0222% ( 1) 00:10:04.649 8422.297 - 8474.937: 0.0332% ( 1) 00:10:04.649 8474.937 - 8527.576: 0.1662% ( 12) 00:10:04.649 8527.576 - 8580.215: 0.3214% ( 14) 00:10:04.649 8580.215 - 8632.855: 0.5098% ( 17) 00:10:04.649 8632.855 - 8685.494: 0.7757% ( 24) 00:10:04.649 8685.494 - 8738.133: 0.9973% ( 20) 00:10:04.649 8738.133 - 8790.773: 1.1414% ( 13) 00:10:04.649 8790.773 - 8843.412: 1.2744% ( 12) 00:10:04.649 8843.412 - 8896.051: 1.3963% ( 11) 00:10:04.649 8896.051 - 8948.691: 1.5293% ( 12) 00:10:04.649 8948.691 - 9001.330: 1.6179% ( 8) 00:10:04.649 9001.330 - 9053.969: 1.7620% ( 13) 00:10:04.649 9053.969 - 9106.609: 2.0168% ( 23) 00:10:04.649 9106.609 - 9159.248: 2.2828% ( 24) 00:10:04.649 9159.248 - 9211.888: 2.5488% ( 24) 00:10:04.649 9211.888 - 9264.527: 2.9809% ( 39) 00:10:04.649 9264.527 - 9317.166: 3.3577% ( 34) 00:10:04.649 9317.166 - 9369.806: 3.6126% ( 23) 00:10:04.649 9369.806 - 9422.445: 3.9118% ( 27) 00:10:04.649 9422.445 - 9475.084: 4.0891% ( 16) 00:10:04.649 9475.084 - 9527.724: 4.1667% ( 7) 00:10:04.649 9527.724 - 9580.363: 4.2664% ( 9) 00:10:04.649 9580.363 - 9633.002: 4.3883% ( 11) 00:10:04.649 9633.002 - 9685.642: 4.5656% ( 16) 00:10:04.650 9685.642 - 9738.281: 4.7651% ( 18) 00:10:04.650 9738.281 - 9790.920: 5.1418% ( 34) 00:10:04.650 9790.920 - 9843.560: 5.7846% ( 58) 00:10:04.650 9843.560 - 9896.199: 6.3719% ( 53) 00:10:04.650 9896.199 - 9948.839: 7.2030% ( 75) 00:10:04.650 9948.839 - 10001.478: 8.0674% ( 78) 00:10:04.650 10001.478 - 10054.117: 8.9539% ( 80) 00:10:04.650 10054.117 - 10106.757: 9.9956% ( 94) 00:10:04.650 10106.757 - 10159.396: 11.0705% ( 97) 00:10:04.650 10159.396 - 10212.035: 12.0678% ( 90) 00:10:04.650 10212.035 - 10264.675: 13.0098% ( 85) 00:10:04.650 10264.675 - 10317.314: 14.1179% ( 100) 00:10:04.650 10317.314 - 10369.953: 14.8936% ( 70) 00:10:04.650 10369.953 - 10422.593: 15.5807% ( 62) 00:10:04.650 10422.593 - 10475.232: 16.1902% ( 55) 00:10:04.650 10475.232 - 10527.871: 16.9105% ( 65) 00:10:04.650 10527.871 - 10580.511: 17.5199% ( 55) 00:10:04.650 10580.511 - 10633.150: 17.9189% ( 36) 00:10:04.650 10633.150 - 10685.790: 18.2513% ( 30) 00:10:04.650 10685.790 - 10738.429: 18.7389% ( 44) 00:10:04.650 10738.429 - 10791.068: 19.1711% ( 39) 00:10:04.650 10791.068 - 10843.708: 19.5700% ( 36) 00:10:04.650 10843.708 - 10896.347: 19.9579% ( 35) 00:10:04.650 10896.347 - 10948.986: 20.5009% ( 49) 00:10:04.650 10948.986 - 11001.626: 21.0217% ( 47) 00:10:04.650 11001.626 - 11054.265: 21.5315% ( 46) 
00:10:04.650 11054.265 - 11106.904: 22.3958% ( 78) 00:10:04.650 11106.904 - 11159.544: 23.3156% ( 83) 00:10:04.650 11159.544 - 11212.183: 24.1135% ( 72) 00:10:04.650 11212.183 - 11264.822: 25.1995% ( 98) 00:10:04.650 11264.822 - 11317.462: 26.0195% ( 74) 00:10:04.650 11317.462 - 11370.101: 26.6179% ( 54) 00:10:04.650 11370.101 - 11422.741: 27.1831% ( 51) 00:10:04.650 11422.741 - 11475.380: 27.8812% ( 63) 00:10:04.650 11475.380 - 11528.019: 28.3466% ( 42) 00:10:04.650 11528.019 - 11580.659: 28.9672% ( 56) 00:10:04.650 11580.659 - 11633.298: 29.4326% ( 42) 00:10:04.650 11633.298 - 11685.937: 29.7207% ( 26) 00:10:04.650 11685.937 - 11738.577: 30.0421% ( 29) 00:10:04.650 11738.577 - 11791.216: 30.4632% ( 38) 00:10:04.650 11791.216 - 11843.855: 30.6848% ( 20) 00:10:04.650 11843.855 - 11896.495: 30.9619% ( 25) 00:10:04.650 11896.495 - 11949.134: 31.2389% ( 25) 00:10:04.650 11949.134 - 12001.773: 31.5492% ( 28) 00:10:04.650 12001.773 - 12054.413: 31.7708% ( 20) 00:10:04.650 12054.413 - 12107.052: 31.9481% ( 16) 00:10:04.650 12107.052 - 12159.692: 32.2030% ( 23) 00:10:04.650 12159.692 - 12212.331: 32.4690% ( 24) 00:10:04.650 12212.331 - 12264.970: 32.7571% ( 26) 00:10:04.650 12264.970 - 12317.610: 33.1339% ( 34) 00:10:04.650 12317.610 - 12370.249: 33.5660% ( 39) 00:10:04.650 12370.249 - 12422.888: 34.0536% ( 44) 00:10:04.650 12422.888 - 12475.528: 34.7518% ( 63) 00:10:04.650 12475.528 - 12528.167: 35.5164% ( 69) 00:10:04.650 12528.167 - 12580.806: 36.0040% ( 44) 00:10:04.650 12580.806 - 12633.446: 36.4362% ( 39) 00:10:04.650 12633.446 - 12686.085: 36.9348% ( 45) 00:10:04.650 12686.085 - 12738.724: 37.2784% ( 31) 00:10:04.650 12738.724 - 12791.364: 37.6219% ( 31) 00:10:04.650 12791.364 - 12844.003: 37.9876% ( 33) 00:10:04.650 12844.003 - 12896.643: 38.5084% ( 47) 00:10:04.650 12896.643 - 12949.282: 39.0293% ( 47) 00:10:04.650 12949.282 - 13001.921: 39.6609% ( 57) 00:10:04.650 13001.921 - 13054.561: 40.4255% ( 69) 00:10:04.650 13054.561 - 13107.200: 41.2788% ( 77) 00:10:04.650 13107.200 - 13159.839: 41.9659% ( 62) 00:10:04.650 13159.839 - 13212.479: 42.6418% ( 61) 00:10:04.650 13212.479 - 13265.118: 43.2181% ( 52) 00:10:04.650 13265.118 - 13317.757: 43.7278% ( 46) 00:10:04.650 13317.757 - 13370.397: 44.2930% ( 51) 00:10:04.650 13370.397 - 13423.036: 44.9690% ( 61) 00:10:04.650 13423.036 - 13475.676: 45.5895% ( 56) 00:10:04.650 13475.676 - 13580.954: 46.9415% ( 122) 00:10:04.650 13580.954 - 13686.233: 48.3488% ( 127) 00:10:04.650 13686.233 - 13791.512: 49.9668% ( 146) 00:10:04.650 13791.512 - 13896.790: 51.5071% ( 139) 00:10:04.650 13896.790 - 14002.069: 52.9034% ( 126) 00:10:04.650 14002.069 - 14107.348: 54.3329% ( 129) 00:10:04.650 14107.348 - 14212.627: 56.1392% ( 163) 00:10:04.650 14212.627 - 14317.905: 57.5022% ( 123) 00:10:04.650 14317.905 - 14423.184: 58.7877% ( 116) 00:10:04.650 14423.184 - 14528.463: 60.0288% ( 112) 00:10:04.650 14528.463 - 14633.741: 60.9929% ( 87) 00:10:04.650 14633.741 - 14739.020: 62.0235% ( 93) 00:10:04.650 14739.020 - 14844.299: 62.9654% ( 85) 00:10:04.650 14844.299 - 14949.578: 63.6082% ( 58) 00:10:04.650 14949.578 - 15054.856: 64.2841% ( 61) 00:10:04.650 15054.856 - 15160.135: 65.2704% ( 89) 00:10:04.650 15160.135 - 15265.414: 66.4340% ( 105) 00:10:04.650 15265.414 - 15370.692: 67.5754% ( 103) 00:10:04.650 15370.692 - 15475.971: 68.9716% ( 126) 00:10:04.650 15475.971 - 15581.250: 70.3014% ( 120) 00:10:04.650 15581.250 - 15686.529: 71.6312% ( 120) 00:10:04.650 15686.529 - 15791.807: 72.7837% ( 104) 00:10:04.650 15791.807 - 15897.086: 73.7256% ( 85) 00:10:04.650 
15897.086 - 16002.365: 74.4570% ( 66) 00:10:04.650 16002.365 - 16107.643: 75.1219% ( 60) 00:10:04.650 16107.643 - 16212.922: 75.8533% ( 66) 00:10:04.650 16212.922 - 16318.201: 76.5293% ( 61) 00:10:04.650 16318.201 - 16423.480: 77.2163% ( 62) 00:10:04.650 16423.480 - 16528.758: 77.8812% ( 60) 00:10:04.650 16528.758 - 16634.037: 78.4464% ( 51) 00:10:04.650 16634.037 - 16739.316: 79.0226% ( 52) 00:10:04.650 16739.316 - 16844.594: 79.6210% ( 54) 00:10:04.650 16844.594 - 16949.873: 80.2748% ( 59) 00:10:04.650 16949.873 - 17055.152: 81.2168% ( 85) 00:10:04.650 17055.152 - 17160.431: 82.1587% ( 85) 00:10:04.650 17160.431 - 17265.709: 82.9122% ( 68) 00:10:04.650 17265.709 - 17370.988: 83.8098% ( 81) 00:10:04.650 17370.988 - 17476.267: 84.6188% ( 73) 00:10:04.650 17476.267 - 17581.545: 85.2948% ( 61) 00:10:04.650 17581.545 - 17686.824: 86.1037% ( 73) 00:10:04.650 17686.824 - 17792.103: 86.9348% ( 75) 00:10:04.650 17792.103 - 17897.382: 87.9100% ( 88) 00:10:04.650 17897.382 - 18002.660: 88.7855% ( 79) 00:10:04.650 18002.660 - 18107.939: 89.5390% ( 68) 00:10:04.650 18107.939 - 18213.218: 90.1374% ( 54) 00:10:04.650 18213.218 - 18318.496: 90.5807% ( 40) 00:10:04.650 18318.496 - 18423.775: 91.1902% ( 55) 00:10:04.650 18423.775 - 18529.054: 91.6999% ( 46) 00:10:04.651 18529.054 - 18634.333: 92.2872% ( 53) 00:10:04.651 18634.333 - 18739.611: 92.9965% ( 64) 00:10:04.651 18739.611 - 18844.890: 93.4840% ( 44) 00:10:04.651 18844.890 - 18950.169: 93.9051% ( 38) 00:10:04.651 18950.169 - 19055.447: 94.2043% ( 27) 00:10:04.651 19055.447 - 19160.726: 94.4260% ( 20) 00:10:04.651 19160.726 - 19266.005: 94.6476% ( 20) 00:10:04.651 19266.005 - 19371.284: 94.9468% ( 27) 00:10:04.651 19371.284 - 19476.562: 95.2017% ( 23) 00:10:04.651 19476.562 - 19581.841: 95.5452% ( 31) 00:10:04.651 19581.841 - 19687.120: 96.0217% ( 43) 00:10:04.651 19687.120 - 19792.398: 96.2323% ( 19) 00:10:04.651 19792.398 - 19897.677: 96.4428% ( 19) 00:10:04.651 19897.677 - 20002.956: 96.6423% ( 18) 00:10:04.651 20002.956 - 20108.235: 96.7974% ( 14) 00:10:04.651 20108.235 - 20213.513: 96.9304% ( 12) 00:10:04.651 20213.513 - 20318.792: 97.0191% ( 8) 00:10:04.651 20318.792 - 20424.071: 97.1077% ( 8) 00:10:04.651 20424.071 - 20529.349: 97.1631% ( 5) 00:10:04.651 20529.349 - 20634.628: 97.2407% ( 7) 00:10:04.651 20634.628 - 20739.907: 97.3293% ( 8) 00:10:04.651 20739.907 - 20845.186: 97.4291% ( 9) 00:10:04.651 20845.186 - 20950.464: 97.5177% ( 8) 00:10:04.651 20950.464 - 21055.743: 97.6618% ( 13) 00:10:04.651 21055.743 - 21161.022: 97.9056% ( 22) 00:10:04.651 21161.022 - 21266.300: 97.9942% ( 8) 00:10:04.651 21266.300 - 21371.579: 98.0718% ( 7) 00:10:04.651 21371.579 - 21476.858: 98.1383% ( 6) 00:10:04.651 21476.858 - 21582.137: 98.1715% ( 3) 00:10:04.651 21582.137 - 21687.415: 98.2159% ( 4) 00:10:04.651 21687.415 - 21792.694: 98.2491% ( 3) 00:10:04.651 21792.694 - 21897.973: 98.2934% ( 4) 00:10:04.651 21897.973 - 22003.251: 98.3378% ( 4) 00:10:04.651 22003.251 - 22108.530: 98.3710% ( 3) 00:10:04.651 22108.530 - 22213.809: 98.4043% ( 3) 00:10:04.651 22213.809 - 22319.088: 98.4375% ( 3) 00:10:04.651 22319.088 - 22424.366: 98.4707% ( 3) 00:10:04.651 22424.366 - 22529.645: 98.5151% ( 4) 00:10:04.651 22529.645 - 22634.924: 98.5483% ( 3) 00:10:04.651 22634.924 - 22740.202: 98.5816% ( 3) 00:10:04.651 31373.057 - 31583.614: 98.5926% ( 1) 00:10:04.651 31583.614 - 31794.172: 98.6591% ( 6) 00:10:04.651 31794.172 - 32004.729: 98.7367% ( 7) 00:10:04.651 32004.729 - 32215.287: 98.8032% ( 6) 00:10:04.651 32215.287 - 32425.844: 98.8808% ( 7) 00:10:04.651 32425.844 - 
32636.402: 98.9473% ( 6) 00:10:04.651 32636.402 - 32846.959: 99.0359% ( 8) 00:10:04.651 32846.959 - 33057.516: 99.0802% ( 4) 00:10:04.651 33057.516 - 33268.074: 99.1467% ( 6) 00:10:04.651 33268.074 - 33478.631: 99.2132% ( 6) 00:10:04.651 33478.631 - 33689.189: 99.2797% ( 6) 00:10:04.651 33689.189 - 33899.746: 99.2908% ( 1) 00:10:04.651 40216.469 - 40427.027: 99.3019% ( 1) 00:10:04.651 40427.027 - 40637.584: 99.3684% ( 6) 00:10:04.651 40637.584 - 40848.141: 99.4348% ( 6) 00:10:04.651 40848.141 - 41058.699: 99.5013% ( 6) 00:10:04.651 41058.699 - 41269.256: 99.5789% ( 7) 00:10:04.651 41269.256 - 41479.814: 99.6343% ( 5) 00:10:04.651 41479.814 - 41690.371: 99.7008% ( 6) 00:10:04.651 41690.371 - 41900.929: 99.7673% ( 6) 00:10:04.651 41900.929 - 42111.486: 99.8338% ( 6) 00:10:04.651 42111.486 - 42322.043: 99.9003% ( 6) 00:10:04.651 42322.043 - 42532.601: 99.9668% ( 6) 00:10:04.651 42532.601 - 42743.158: 100.0000% ( 3) 00:10:04.651 00:10:04.651 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:04.651 ============================================================================== 00:10:04.651 Range in us Cumulative IO count 00:10:04.651 8211.740 - 8264.379: 0.0222% ( 2) 00:10:04.651 8264.379 - 8317.018: 0.1108% ( 8) 00:10:04.651 8317.018 - 8369.658: 0.1995% ( 8) 00:10:04.651 8369.658 - 8422.297: 0.2881% ( 8) 00:10:04.651 8422.297 - 8474.937: 0.4654% ( 16) 00:10:04.651 8474.937 - 8527.576: 0.5430% ( 7) 00:10:04.651 8527.576 - 8580.215: 0.6206% ( 7) 00:10:04.651 8580.215 - 8632.855: 0.6649% ( 4) 00:10:04.651 8632.855 - 8685.494: 0.6981% ( 3) 00:10:04.651 8685.494 - 8738.133: 0.7092% ( 1) 00:10:04.651 8896.051 - 8948.691: 0.7425% ( 3) 00:10:04.651 8948.691 - 9001.330: 0.8533% ( 10) 00:10:04.651 9001.330 - 9053.969: 0.9641% ( 10) 00:10:04.651 9053.969 - 9106.609: 1.1192% ( 14) 00:10:04.651 9106.609 - 9159.248: 1.3741% ( 23) 00:10:04.651 9159.248 - 9211.888: 1.6622% ( 26) 00:10:04.651 9211.888 - 9264.527: 2.1720% ( 46) 00:10:04.651 9264.527 - 9317.166: 2.7482% ( 52) 00:10:04.651 9317.166 - 9369.806: 3.3688% ( 56) 00:10:04.651 9369.806 - 9422.445: 3.8342% ( 42) 00:10:04.651 9422.445 - 9475.084: 4.1556% ( 29) 00:10:04.651 9475.084 - 9527.724: 4.3883% ( 21) 00:10:04.651 9527.724 - 9580.363: 4.8426% ( 41) 00:10:04.651 9580.363 - 9633.002: 5.0754% ( 21) 00:10:04.651 9633.002 - 9685.642: 5.4743% ( 36) 00:10:04.651 9685.642 - 9738.281: 5.8732% ( 36) 00:10:04.651 9738.281 - 9790.920: 6.3941% ( 47) 00:10:04.651 9790.920 - 9843.560: 6.9925% ( 54) 00:10:04.651 9843.560 - 9896.199: 7.7128% ( 65) 00:10:04.651 9896.199 - 9948.839: 8.3555% ( 58) 00:10:04.651 9948.839 - 10001.478: 9.2199% ( 78) 00:10:04.651 10001.478 - 10054.117: 10.0288% ( 73) 00:10:04.651 10054.117 - 10106.757: 10.5940% ( 51) 00:10:04.651 10106.757 - 10159.396: 11.3254% ( 66) 00:10:04.651 10159.396 - 10212.035: 12.1676% ( 76) 00:10:04.651 10212.035 - 10264.675: 13.1760% ( 91) 00:10:04.651 10264.675 - 10317.314: 14.1512% ( 88) 00:10:04.651 10317.314 - 10369.953: 15.1817% ( 93) 00:10:04.651 10369.953 - 10422.593: 15.9685% ( 71) 00:10:04.651 10422.593 - 10475.232: 16.6556% ( 62) 00:10:04.651 10475.232 - 10527.871: 17.3094% ( 59) 00:10:04.651 10527.871 - 10580.511: 17.7637% ( 41) 00:10:04.651 10580.511 - 10633.150: 18.2513% ( 44) 00:10:04.651 10633.150 - 10685.790: 18.6059% ( 32) 00:10:04.651 10685.790 - 10738.429: 18.9273% ( 29) 00:10:04.651 10738.429 - 10791.068: 19.4038% ( 43) 00:10:04.651 10791.068 - 10843.708: 20.0022% ( 54) 00:10:04.651 10843.708 - 10896.347: 20.3790% ( 34) 00:10:04.651 10896.347 - 10948.986: 21.0439% ( 60) 
00:10:04.651 10948.986 - 11001.626: 21.9747% ( 84) 00:10:04.651 11001.626 - 11054.265: 22.9056% ( 84) 00:10:04.651 11054.265 - 11106.904: 23.7035% ( 72) 00:10:04.651 11106.904 - 11159.544: 24.4238% ( 65) 00:10:04.651 11159.544 - 11212.183: 24.8449% ( 38) 00:10:04.651 11212.183 - 11264.822: 25.5541% ( 64) 00:10:04.651 11264.822 - 11317.462: 26.0306% ( 43) 00:10:04.651 11317.462 - 11370.101: 26.4738% ( 40) 00:10:04.651 11370.101 - 11422.741: 27.0279% ( 50) 00:10:04.651 11422.741 - 11475.380: 27.5377% ( 46) 00:10:04.651 11475.380 - 11528.019: 27.9145% ( 34) 00:10:04.651 11528.019 - 11580.659: 28.2580% ( 31) 00:10:04.651 11580.659 - 11633.298: 28.6902% ( 39) 00:10:04.651 11633.298 - 11685.937: 29.0559% ( 33) 00:10:04.651 11685.937 - 11738.577: 29.4105% ( 32) 00:10:04.651 11738.577 - 11791.216: 29.6875% ( 25) 00:10:04.651 11791.216 - 11843.855: 29.9978% ( 28) 00:10:04.651 11843.855 - 11896.495: 30.5408% ( 49) 00:10:04.651 11896.495 - 11949.134: 31.0505% ( 46) 00:10:04.651 11949.134 - 12001.773: 31.4827% ( 39) 00:10:04.651 12001.773 - 12054.413: 31.8484% ( 33) 00:10:04.651 12054.413 - 12107.052: 32.1809% ( 30) 00:10:04.651 12107.052 - 12159.692: 32.7793% ( 54) 00:10:04.651 12159.692 - 12212.331: 33.2558% ( 43) 00:10:04.651 12212.331 - 12264.970: 33.6879% ( 39) 00:10:04.651 12264.970 - 12317.610: 34.1755% ( 44) 00:10:04.651 12317.610 - 12370.249: 34.8183% ( 58) 00:10:04.651 12370.249 - 12422.888: 35.2948% ( 43) 00:10:04.651 12422.888 - 12475.528: 35.6605% ( 33) 00:10:04.651 12475.528 - 12528.167: 35.8821% ( 20) 00:10:04.651 12528.167 - 12580.806: 36.1480% ( 24) 00:10:04.651 12580.806 - 12633.446: 36.3918% ( 22) 00:10:04.651 12633.446 - 12686.085: 36.5581% ( 15) 00:10:04.651 12686.085 - 12738.724: 36.7686% ( 19) 00:10:04.651 12738.724 - 12791.364: 36.9681% ( 18) 00:10:04.651 12791.364 - 12844.003: 37.1343% ( 15) 00:10:04.651 12844.003 - 12896.643: 37.4335% ( 27) 00:10:04.651 12896.643 - 12949.282: 37.7881% ( 32) 00:10:04.651 12949.282 - 13001.921: 38.4087% ( 56) 00:10:04.651 13001.921 - 13054.561: 38.9738% ( 51) 00:10:04.652 13054.561 - 13107.200: 39.6166% ( 58) 00:10:04.652 13107.200 - 13159.839: 40.2926% ( 61) 00:10:04.652 13159.839 - 13212.479: 41.0350% ( 67) 00:10:04.652 13212.479 - 13265.118: 42.0434% ( 91) 00:10:04.652 13265.118 - 13317.757: 42.7637% ( 65) 00:10:04.652 13317.757 - 13370.397: 43.5727% ( 73) 00:10:04.652 13370.397 - 13423.036: 44.1933% ( 56) 00:10:04.652 13423.036 - 13475.676: 44.8027% ( 55) 00:10:04.652 13475.676 - 13580.954: 46.1104% ( 118) 00:10:04.652 13580.954 - 13686.233: 47.6507% ( 139) 00:10:04.652 13686.233 - 13791.512: 49.0691% ( 128) 00:10:04.652 13791.512 - 13896.790: 50.5430% ( 133) 00:10:04.652 13896.790 - 14002.069: 52.0833% ( 139) 00:10:04.652 14002.069 - 14107.348: 53.7234% ( 148) 00:10:04.652 14107.348 - 14212.627: 55.1418% ( 128) 00:10:04.652 14212.627 - 14317.905: 56.7708% ( 147) 00:10:04.652 14317.905 - 14423.184: 58.3555% ( 143) 00:10:04.652 14423.184 - 14528.463: 60.1285% ( 160) 00:10:04.652 14528.463 - 14633.741: 61.1813% ( 95) 00:10:04.652 14633.741 - 14739.020: 62.1897% ( 91) 00:10:04.652 14739.020 - 14844.299: 63.8298% ( 148) 00:10:04.652 14844.299 - 14949.578: 65.3701% ( 139) 00:10:04.652 14949.578 - 15054.856: 66.3675% ( 90) 00:10:04.652 15054.856 - 15160.135: 67.2207% ( 77) 00:10:04.652 15160.135 - 15265.414: 68.1184% ( 81) 00:10:04.652 15265.414 - 15370.692: 68.7943% ( 61) 00:10:04.652 15370.692 - 15475.971: 69.3041% ( 46) 00:10:04.652 15475.971 - 15581.250: 70.0355% ( 66) 00:10:04.652 15581.250 - 15686.529: 70.8887% ( 77) 00:10:04.652 15686.529 
- 15791.807: 71.9082% ( 92) 00:10:04.652 15791.807 - 15897.086: 73.1715% ( 114) 00:10:04.652 15897.086 - 16002.365: 74.3129% ( 103) 00:10:04.652 16002.365 - 16107.643: 75.3989% ( 98) 00:10:04.652 16107.643 - 16212.922: 76.3298% ( 84) 00:10:04.652 16212.922 - 16318.201: 76.9282% ( 54) 00:10:04.652 16318.201 - 16423.480: 77.6596% ( 66) 00:10:04.652 16423.480 - 16528.758: 78.2801% ( 56) 00:10:04.652 16528.758 - 16634.037: 78.8896% ( 55) 00:10:04.652 16634.037 - 16739.316: 79.5102% ( 56) 00:10:04.652 16739.316 - 16844.594: 80.0864% ( 52) 00:10:04.652 16844.594 - 16949.873: 80.7181% ( 57) 00:10:04.652 16949.873 - 17055.152: 81.3165% ( 54) 00:10:04.652 17055.152 - 17160.431: 82.1919% ( 79) 00:10:04.652 17160.431 - 17265.709: 83.1449% ( 86) 00:10:04.652 17265.709 - 17370.988: 83.8874% ( 67) 00:10:04.652 17370.988 - 17476.267: 84.6742% ( 71) 00:10:04.652 17476.267 - 17581.545: 85.3280% ( 59) 00:10:04.652 17581.545 - 17686.824: 86.1259% ( 72) 00:10:04.652 17686.824 - 17792.103: 86.9459% ( 74) 00:10:04.652 17792.103 - 17897.382: 87.6662% ( 65) 00:10:04.652 17897.382 - 18002.660: 88.3754% ( 64) 00:10:04.652 18002.660 - 18107.939: 89.1955% ( 74) 00:10:04.652 18107.939 - 18213.218: 89.9379% ( 67) 00:10:04.652 18213.218 - 18318.496: 90.5696% ( 57) 00:10:04.652 18318.496 - 18423.775: 91.1348% ( 51) 00:10:04.652 18423.775 - 18529.054: 91.5559% ( 38) 00:10:04.652 18529.054 - 18634.333: 91.8772% ( 29) 00:10:04.652 18634.333 - 18739.611: 92.4091% ( 48) 00:10:04.652 18739.611 - 18844.890: 93.1073% ( 63) 00:10:04.652 18844.890 - 18950.169: 93.6613% ( 50) 00:10:04.652 18950.169 - 19055.447: 94.0160% ( 32) 00:10:04.652 19055.447 - 19160.726: 94.3152% ( 27) 00:10:04.652 19160.726 - 19266.005: 94.6144% ( 27) 00:10:04.652 19266.005 - 19371.284: 94.8803% ( 24) 00:10:04.652 19371.284 - 19476.562: 95.1684% ( 26) 00:10:04.652 19476.562 - 19581.841: 95.4898% ( 29) 00:10:04.652 19581.841 - 19687.120: 95.7558% ( 24) 00:10:04.652 19687.120 - 19792.398: 95.9109% ( 14) 00:10:04.652 19792.398 - 19897.677: 96.3098% ( 36) 00:10:04.652 19897.677 - 20002.956: 96.5315% ( 20) 00:10:04.652 20002.956 - 20108.235: 96.6755% ( 13) 00:10:04.652 20108.235 - 20213.513: 96.7642% ( 8) 00:10:04.652 20213.513 - 20318.792: 96.8972% ( 12) 00:10:04.652 20318.792 - 20424.071: 97.0412% ( 13) 00:10:04.652 20424.071 - 20529.349: 97.1964% ( 14) 00:10:04.652 20529.349 - 20634.628: 97.3515% ( 14) 00:10:04.652 20634.628 - 20739.907: 97.4069% ( 5) 00:10:04.652 20739.907 - 20845.186: 97.4845% ( 7) 00:10:04.652 20845.186 - 20950.464: 97.5953% ( 10) 00:10:04.652 20950.464 - 21055.743: 97.6729% ( 7) 00:10:04.652 21055.743 - 21161.022: 97.7615% ( 8) 00:10:04.652 21161.022 - 21266.300: 97.8502% ( 8) 00:10:04.652 21266.300 - 21371.579: 97.9388% ( 8) 00:10:04.652 21371.579 - 21476.858: 98.0718% ( 12) 00:10:04.652 21476.858 - 21582.137: 98.2824% ( 19) 00:10:04.652 21582.137 - 21687.415: 98.3932% ( 10) 00:10:04.652 21687.415 - 21792.694: 98.4597% ( 6) 00:10:04.652 21792.694 - 21897.973: 98.5372% ( 7) 00:10:04.652 21897.973 - 22003.251: 98.5816% ( 4) 00:10:04.652 30109.712 - 30320.270: 98.7589% ( 16) 00:10:04.652 30320.270 - 30530.827: 98.8032% ( 4) 00:10:04.652 30530.827 - 30741.385: 98.8586% ( 5) 00:10:04.652 30741.385 - 30951.942: 98.9140% ( 5) 00:10:04.652 30951.942 - 31162.500: 98.9805% ( 6) 00:10:04.652 31162.500 - 31373.057: 99.0470% ( 6) 00:10:04.652 31373.057 - 31583.614: 99.1135% ( 6) 00:10:04.652 31583.614 - 31794.172: 99.1800% ( 6) 00:10:04.652 31794.172 - 32004.729: 99.2465% ( 6) 00:10:04.652 32004.729 - 32215.287: 99.2908% ( 4) 00:10:04.652 38742.567 - 
38953.124: 99.3573% ( 6) 00:10:04.652 38953.124 - 39163.682: 99.4238% ( 6) 00:10:04.652 39163.682 - 39374.239: 99.4792% ( 5) 00:10:04.652 39374.239 - 39584.797: 99.5457% ( 6) 00:10:04.652 39584.797 - 39795.354: 99.6232% ( 7) 00:10:04.652 39795.354 - 40005.912: 99.6897% ( 6) 00:10:04.652 40005.912 - 40216.469: 99.7562% ( 6) 00:10:04.652 40216.469 - 40427.027: 99.8227% ( 6) 00:10:04.652 40427.027 - 40637.584: 99.9003% ( 7) 00:10:04.652 40637.584 - 40848.141: 99.9668% ( 6) 00:10:04.652 40848.141 - 41058.699: 100.0000% ( 3) 00:10:04.652 00:10:04.652 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:04.652 ============================================================================== 00:10:04.652 Range in us Cumulative IO count 00:10:04.652 8106.461 - 8159.100: 0.0111% ( 1) 00:10:04.652 8317.018 - 8369.658: 0.0222% ( 1) 00:10:04.652 8369.658 - 8422.297: 0.0997% ( 7) 00:10:04.652 8422.297 - 8474.937: 0.1884% ( 8) 00:10:04.652 8474.937 - 8527.576: 0.3214% ( 12) 00:10:04.652 8527.576 - 8580.215: 0.5098% ( 17) 00:10:04.652 8580.215 - 8632.855: 0.5762% ( 6) 00:10:04.652 8632.855 - 8685.494: 0.6427% ( 6) 00:10:04.652 8685.494 - 8738.133: 0.6871% ( 4) 00:10:04.652 8738.133 - 8790.773: 0.7092% ( 2) 00:10:04.652 8790.773 - 8843.412: 0.7203% ( 1) 00:10:04.652 8896.051 - 8948.691: 0.7425% ( 2) 00:10:04.652 8948.691 - 9001.330: 0.8200% ( 7) 00:10:04.652 9001.330 - 9053.969: 0.9419% ( 11) 00:10:04.652 9053.969 - 9106.609: 1.0971% ( 14) 00:10:04.652 9106.609 - 9159.248: 1.4074% ( 28) 00:10:04.652 9159.248 - 9211.888: 1.6401% ( 21) 00:10:04.652 9211.888 - 9264.527: 2.0833% ( 40) 00:10:04.652 9264.527 - 9317.166: 2.4601% ( 34) 00:10:04.652 9317.166 - 9369.806: 2.8701% ( 37) 00:10:04.652 9369.806 - 9422.445: 3.4131% ( 49) 00:10:04.652 9422.445 - 9475.084: 3.9007% ( 44) 00:10:04.652 9475.084 - 9527.724: 4.2996% ( 36) 00:10:04.652 9527.724 - 9580.363: 4.8205% ( 47) 00:10:04.652 9580.363 - 9633.002: 5.2305% ( 37) 00:10:04.652 9633.002 - 9685.642: 5.6959% ( 42) 00:10:04.652 9685.642 - 9738.281: 6.2611% ( 51) 00:10:04.652 9738.281 - 9790.920: 6.6268% ( 33) 00:10:04.652 9790.920 - 9843.560: 7.2917% ( 60) 00:10:04.652 9843.560 - 9896.199: 7.7017% ( 37) 00:10:04.652 9896.199 - 9948.839: 8.0895% ( 35) 00:10:04.652 9948.839 - 10001.478: 8.6215% ( 48) 00:10:04.652 10001.478 - 10054.117: 9.3639% ( 67) 00:10:04.652 10054.117 - 10106.757: 10.2172% ( 77) 00:10:04.652 10106.757 - 10159.396: 11.3143% ( 99) 00:10:04.652 10159.396 - 10212.035: 12.6330% ( 119) 00:10:04.652 10212.035 - 10264.675: 13.9184% ( 116) 00:10:04.652 10264.675 - 10317.314: 14.9712% ( 95) 00:10:04.652 10317.314 - 10369.953: 15.7137% ( 67) 00:10:04.652 10369.953 - 10422.593: 16.3675% ( 59) 00:10:04.652 10422.593 - 10475.232: 17.0988% ( 66) 00:10:04.652 10475.232 - 10527.871: 17.7970% ( 63) 00:10:04.653 10527.871 - 10580.511: 18.1184% ( 29) 00:10:04.653 10580.511 - 10633.150: 18.4840% ( 33) 00:10:04.653 10633.150 - 10685.790: 18.9384% ( 41) 00:10:04.653 10685.790 - 10738.429: 19.2598% ( 29) 00:10:04.653 10738.429 - 10791.068: 19.6254% ( 33) 00:10:04.653 10791.068 - 10843.708: 20.1463% ( 47) 00:10:04.653 10843.708 - 10896.347: 20.8887% ( 67) 00:10:04.653 10896.347 - 10948.986: 21.8972% ( 91) 00:10:04.653 10948.986 - 11001.626: 22.6840% ( 71) 00:10:04.653 11001.626 - 11054.265: 23.1051% ( 38) 00:10:04.653 11054.265 - 11106.904: 23.7367% ( 57) 00:10:04.653 11106.904 - 11159.544: 24.2575% ( 47) 00:10:04.653 11159.544 - 11212.183: 24.8005% ( 49) 00:10:04.653 11212.183 - 11264.822: 25.6316% ( 75) 00:10:04.653 11264.822 - 11317.462: 26.0638% ( 39) 
00:10:04.653 11317.462 - 11370.101: 26.5847% ( 47) 00:10:04.653 11370.101 - 11422.741: 27.2052% ( 56) 00:10:04.653 11422.741 - 11475.380: 27.7926% ( 53) 00:10:04.653 11475.380 - 11528.019: 28.4353% ( 58) 00:10:04.653 11528.019 - 11580.659: 28.8453% ( 37) 00:10:04.653 11580.659 - 11633.298: 29.2553% ( 37) 00:10:04.653 11633.298 - 11685.937: 29.7207% ( 42) 00:10:04.653 11685.937 - 11738.577: 30.1973% ( 43) 00:10:04.653 11738.577 - 11791.216: 30.7070% ( 46) 00:10:04.653 11791.216 - 11843.855: 31.3276% ( 56) 00:10:04.653 11843.855 - 11896.495: 31.7265% ( 36) 00:10:04.653 11896.495 - 11949.134: 32.0146% ( 26) 00:10:04.653 11949.134 - 12001.773: 32.3803% ( 33) 00:10:04.653 12001.773 - 12054.413: 33.0230% ( 58) 00:10:04.653 12054.413 - 12107.052: 33.5217% ( 45) 00:10:04.653 12107.052 - 12159.692: 33.9317% ( 37) 00:10:04.653 12159.692 - 12212.331: 34.2863% ( 32) 00:10:04.653 12212.331 - 12264.970: 34.5080% ( 20) 00:10:04.653 12264.970 - 12317.610: 34.7629% ( 23) 00:10:04.653 12317.610 - 12370.249: 35.0510% ( 26) 00:10:04.653 12370.249 - 12422.888: 35.3502% ( 27) 00:10:04.653 12422.888 - 12475.528: 35.6272% ( 25) 00:10:04.653 12475.528 - 12528.167: 35.9486% ( 29) 00:10:04.653 12528.167 - 12580.806: 36.4805% ( 48) 00:10:04.653 12580.806 - 12633.446: 36.7908% ( 28) 00:10:04.653 12633.446 - 12686.085: 37.2784% ( 44) 00:10:04.653 12686.085 - 12738.724: 37.6551% ( 34) 00:10:04.653 12738.724 - 12791.364: 38.0762% ( 38) 00:10:04.653 12791.364 - 12844.003: 38.5527% ( 43) 00:10:04.653 12844.003 - 12896.643: 39.0293% ( 43) 00:10:04.653 12896.643 - 12949.282: 39.4393% ( 37) 00:10:04.653 12949.282 - 13001.921: 39.9823% ( 49) 00:10:04.653 13001.921 - 13054.561: 40.4699% ( 44) 00:10:04.653 13054.561 - 13107.200: 40.9464% ( 43) 00:10:04.653 13107.200 - 13159.839: 41.3453% ( 36) 00:10:04.653 13159.839 - 13212.479: 41.8218% ( 43) 00:10:04.653 13212.479 - 13265.118: 42.3316% ( 46) 00:10:04.653 13265.118 - 13317.757: 42.9189% ( 53) 00:10:04.653 13317.757 - 13370.397: 43.5284% ( 55) 00:10:04.653 13370.397 - 13423.036: 44.0714% ( 49) 00:10:04.653 13423.036 - 13475.676: 44.7363% ( 60) 00:10:04.653 13475.676 - 13580.954: 46.1436% ( 127) 00:10:04.653 13580.954 - 13686.233: 47.6175% ( 133) 00:10:04.653 13686.233 - 13791.512: 49.4127% ( 162) 00:10:04.653 13791.512 - 13896.790: 50.6427% ( 111) 00:10:04.653 13896.790 - 14002.069: 52.3493% ( 154) 00:10:04.653 14002.069 - 14107.348: 53.7123% ( 123) 00:10:04.653 14107.348 - 14212.627: 55.2748% ( 141) 00:10:04.653 14212.627 - 14317.905: 56.5935% ( 119) 00:10:04.653 14317.905 - 14423.184: 57.9676% ( 124) 00:10:04.653 14423.184 - 14528.463: 58.9539% ( 89) 00:10:04.653 14528.463 - 14633.741: 60.0621% ( 100) 00:10:04.653 14633.741 - 14739.020: 61.2589% ( 108) 00:10:04.653 14739.020 - 14844.299: 62.4889% ( 111) 00:10:04.653 14844.299 - 14949.578: 63.8298% ( 121) 00:10:04.653 14949.578 - 15054.856: 65.5363% ( 154) 00:10:04.653 15054.856 - 15160.135: 66.7332% ( 108) 00:10:04.653 15160.135 - 15265.414: 67.8191% ( 98) 00:10:04.653 15265.414 - 15370.692: 68.8165% ( 90) 00:10:04.653 15370.692 - 15475.971: 69.6698% ( 77) 00:10:04.653 15475.971 - 15581.250: 70.4898% ( 74) 00:10:04.653 15581.250 - 15686.529: 71.2766% ( 71) 00:10:04.653 15686.529 - 15791.807: 72.2850% ( 91) 00:10:04.653 15791.807 - 15897.086: 73.0718% ( 71) 00:10:04.653 15897.086 - 16002.365: 73.8808% ( 73) 00:10:04.653 16002.365 - 16107.643: 74.7119% ( 75) 00:10:04.653 16107.643 - 16212.922: 75.5873% ( 79) 00:10:04.653 16212.922 - 16318.201: 76.5182% ( 84) 00:10:04.653 16318.201 - 16423.480: 77.3050% ( 71) 00:10:04.653 
16423.480 - 16528.758: 78.1693% ( 78) 00:10:04.653 16528.758 - 16634.037: 79.0337% ( 78) 00:10:04.653 16634.037 - 16739.316: 79.8426% ( 73) 00:10:04.653 16739.316 - 16844.594: 80.4965% ( 59) 00:10:04.653 16844.594 - 16949.873: 81.1170% ( 56) 00:10:04.653 16949.873 - 17055.152: 81.5381% ( 38) 00:10:04.653 17055.152 - 17160.431: 81.8927% ( 32) 00:10:04.653 17160.431 - 17265.709: 82.3582% ( 42) 00:10:04.653 17265.709 - 17370.988: 82.8568% ( 45) 00:10:04.653 17370.988 - 17476.267: 83.5660% ( 64) 00:10:04.653 17476.267 - 17581.545: 84.1977% ( 57) 00:10:04.653 17581.545 - 17686.824: 84.9512% ( 68) 00:10:04.653 17686.824 - 17792.103: 85.9375% ( 89) 00:10:04.653 17792.103 - 17897.382: 86.7243% ( 71) 00:10:04.653 17897.382 - 18002.660: 87.6108% ( 80) 00:10:04.653 18002.660 - 18107.939: 88.1760% ( 51) 00:10:04.653 18107.939 - 18213.218: 88.8963% ( 65) 00:10:04.653 18213.218 - 18318.496: 90.0044% ( 100) 00:10:04.653 18318.496 - 18423.775: 90.9242% ( 83) 00:10:04.653 18423.775 - 18529.054: 91.4561% ( 48) 00:10:04.653 18529.054 - 18634.333: 92.0213% ( 51) 00:10:04.653 18634.333 - 18739.611: 92.3870% ( 33) 00:10:04.653 18739.611 - 18844.890: 92.8081% ( 38) 00:10:04.653 18844.890 - 18950.169: 93.2070% ( 36) 00:10:04.653 18950.169 - 19055.447: 93.6170% ( 37) 00:10:04.653 19055.447 - 19160.726: 94.1157% ( 45) 00:10:04.653 19160.726 - 19266.005: 94.5590% ( 40) 00:10:04.653 19266.005 - 19371.284: 94.9136% ( 32) 00:10:04.653 19371.284 - 19476.562: 95.2460% ( 30) 00:10:04.653 19476.562 - 19581.841: 95.6006% ( 32) 00:10:04.653 19581.841 - 19687.120: 96.0439% ( 40) 00:10:04.653 19687.120 - 19792.398: 96.3874% ( 31) 00:10:04.653 19792.398 - 19897.677: 96.6201% ( 21) 00:10:04.653 19897.677 - 20002.956: 96.8418% ( 20) 00:10:04.653 20002.956 - 20108.235: 97.1188% ( 25) 00:10:04.653 20108.235 - 20213.513: 97.3848% ( 24) 00:10:04.653 20213.513 - 20318.792: 97.5953% ( 19) 00:10:04.653 20318.792 - 20424.071: 97.7283% ( 12) 00:10:04.653 20424.071 - 20529.349: 97.7837% ( 5) 00:10:04.653 20529.349 - 20634.628: 97.8280% ( 4) 00:10:04.653 20634.628 - 20739.907: 97.8502% ( 2) 00:10:04.653 20739.907 - 20845.186: 97.8723% ( 2) 00:10:04.653 20845.186 - 20950.464: 97.8834% ( 1) 00:10:04.653 20950.464 - 21055.743: 97.9056% ( 2) 00:10:04.653 21055.743 - 21161.022: 97.9721% ( 6) 00:10:04.653 21161.022 - 21266.300: 98.0275% ( 5) 00:10:04.653 21266.300 - 21371.579: 98.0829% ( 5) 00:10:04.653 21371.579 - 21476.858: 98.1383% ( 5) 00:10:04.653 21476.858 - 21582.137: 98.2048% ( 6) 00:10:04.653 21582.137 - 21687.415: 98.2491% ( 4) 00:10:04.653 21687.415 - 21792.694: 98.2824% ( 3) 00:10:04.653 21792.694 - 21897.973: 98.3932% ( 10) 00:10:04.653 21897.973 - 22003.251: 98.4264% ( 3) 00:10:04.653 22003.251 - 22108.530: 98.4597% ( 3) 00:10:04.653 22108.530 - 22213.809: 98.4818% ( 2) 00:10:04.653 22213.809 - 22319.088: 98.5151% ( 3) 00:10:04.653 22319.088 - 22424.366: 98.5372% ( 2) 00:10:04.653 22424.366 - 22529.645: 98.5705% ( 3) 00:10:04.653 22529.645 - 22634.924: 98.5816% ( 1) 00:10:04.653 28846.368 - 29056.925: 98.7256% ( 13) 00:10:04.653 29056.925 - 29267.483: 98.8697% ( 13) 00:10:04.653 29267.483 - 29478.040: 98.9805% ( 10) 00:10:04.653 29478.040 - 29688.598: 99.0248% ( 4) 00:10:04.653 29688.598 - 29899.155: 99.0802% ( 5) 00:10:04.653 29899.155 - 30109.712: 99.1356% ( 5) 00:10:04.653 30109.712 - 30320.270: 99.1910% ( 5) 00:10:04.653 30320.270 - 30530.827: 99.2354% ( 4) 00:10:04.653 30530.827 - 30741.385: 99.2908% ( 5) 00:10:04.653 36005.320 - 36215.878: 99.3351% ( 4) 00:10:04.653 36215.878 - 36426.435: 99.3794% ( 4) 00:10:04.653 37268.665 - 
37479.222: 99.3905% ( 1) 00:10:04.654 37479.222 - 37689.780: 99.4459% ( 5) 00:10:04.654 37689.780 - 37900.337: 99.5013% ( 5) 00:10:04.654 37900.337 - 38110.895: 99.5567% ( 5) 00:10:04.654 38110.895 - 38321.452: 99.6232% ( 6) 00:10:04.654 38321.452 - 38532.010: 99.6786% ( 5) 00:10:04.654 38532.010 - 38742.567: 99.7562% ( 7) 00:10:04.654 38742.567 - 38953.124: 99.8227% ( 6) 00:10:04.654 38953.124 - 39163.682: 99.8892% ( 6) 00:10:04.654 39163.682 - 39374.239: 99.9557% ( 6) 00:10:04.654 39374.239 - 39584.797: 100.0000% ( 4) 00:10:04.654 00:10:04.654 12:03:52 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:04.654 00:10:04.654 real 0m2.640s 00:10:04.654 user 0m2.245s 00:10:04.654 sys 0m0.282s 00:10:04.654 12:03:52 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.654 12:03:52 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 ************************************ 00:10:04.654 END TEST nvme_perf 00:10:04.654 ************************************ 00:10:04.654 12:03:52 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:04.654 12:03:52 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:04.654 12:03:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.654 12:03:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.654 ************************************ 00:10:04.654 START TEST nvme_hello_world 00:10:04.654 ************************************ 00:10:04.654 12:03:52 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:04.654 Initializing NVMe Controllers 00:10:04.654 Attached to 0000:00:10.0 00:10:04.654 Namespace ID: 1 size: 6GB 00:10:04.654 Attached to 0000:00:11.0 00:10:04.654 Namespace ID: 1 size: 5GB 00:10:04.654 Attached to 0000:00:13.0 00:10:04.654 Namespace ID: 1 size: 1GB 00:10:04.654 Attached to 0000:00:12.0 00:10:04.654 Namespace ID: 1 size: 4GB 00:10:04.654 Namespace ID: 2 size: 4GB 00:10:04.654 Namespace ID: 3 size: 4GB 00:10:04.654 Initialization complete. 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 00:10:04.654 INFO: using host memory buffer for IO 00:10:04.654 Hello world! 
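[Editorial aside, not part of the captured log] The nvme_hello_world output above prints one "Hello world!" per attached namespace, so the six lines match the 1 + 1 + 1 + 3 namespaces reported for the four controllers. A minimal check, assuming this part of the console output has been saved to a file named hello_world.log (a hypothetical name, not from the log):

    # Illustrative only; hello_world.log is an assumed filename.
    grep -c 'Hello world!' hello_world.log    # expected: 6, one per namespace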
00:10:04.912 00:10:04.912 real 0m0.285s 00:10:04.912 user 0m0.102s 00:10:04.912 sys 0m0.141s 00:10:04.912 12:03:52 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.912 12:03:52 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:04.912 ************************************ 00:10:04.912 END TEST nvme_hello_world 00:10:04.912 ************************************ 00:10:04.912 12:03:52 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:04.912 12:03:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:04.912 12:03:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.912 12:03:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.912 ************************************ 00:10:04.912 START TEST nvme_sgl 00:10:04.912 ************************************ 00:10:04.912 12:03:52 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:05.170 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:05.170 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:05.170 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:05.170 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:05.170 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:05.170 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:05.170 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:05.170 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:05.171 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:05.171 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:05.171 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:05.171 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:05.171 NVMe Readv/Writev Request test 00:10:05.171 Attached to 0000:00:10.0 00:10:05.171 Attached to 0000:00:11.0 00:10:05.171 Attached to 0000:00:13.0 00:10:05.171 Attached to 0000:00:12.0 00:10:05.171 0000:00:10.0: build_io_request_2 test passed 00:10:05.171 0000:00:10.0: build_io_request_4 test passed 00:10:05.171 0000:00:10.0: build_io_request_5 test passed 00:10:05.171 0000:00:10.0: build_io_request_6 test passed 00:10:05.171 0000:00:10.0: build_io_request_7 test passed 00:10:05.171 0000:00:10.0: build_io_request_10 test passed 00:10:05.171 0000:00:11.0: build_io_request_2 test passed 00:10:05.171 0000:00:11.0: build_io_request_4 test passed 00:10:05.171 0000:00:11.0: build_io_request_5 test passed 00:10:05.171 0000:00:11.0: build_io_request_6 test passed 00:10:05.171 0000:00:11.0: build_io_request_7 test passed 00:10:05.171 0000:00:11.0: build_io_request_10 test passed 00:10:05.171 Cleaning up... 00:10:05.171 00:10:05.171 real 0m0.358s 00:10:05.171 user 0m0.170s 00:10:05.171 sys 0m0.141s 00:10:05.171 12:03:53 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.171 12:03:53 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:05.171 ************************************ 00:10:05.171 END TEST nvme_sgl 00:10:05.171 ************************************ 00:10:05.443 12:03:53 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:05.443 12:03:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.443 12:03:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.443 12:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 ************************************ 00:10:05.443 START TEST nvme_e2edp 00:10:05.443 ************************************ 00:10:05.443 12:03:53 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:05.751 NVMe Write/Read with End-to-End data protection test 00:10:05.751 Attached to 0000:00:10.0 00:10:05.751 Attached to 0000:00:11.0 00:10:05.751 Attached to 0000:00:13.0 00:10:05.751 Attached to 0000:00:12.0 00:10:05.751 Cleaning up... 
00:10:05.751 00:10:05.751 real 0m0.303s 00:10:05.751 user 0m0.102s 00:10:05.751 sys 0m0.157s 00:10:05.751 ************************************ 00:10:05.751 END TEST nvme_e2edp 00:10:05.751 ************************************ 00:10:05.751 12:03:53 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.751 12:03:53 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:05.752 12:03:53 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:05.752 12:03:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.752 12:03:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.752 12:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.752 ************************************ 00:10:05.752 START TEST nvme_reserve 00:10:05.752 ************************************ 00:10:05.752 12:03:53 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:06.027 ===================================================== 00:10:06.027 NVMe Controller at PCI bus 0, device 16, function 0 00:10:06.027 ===================================================== 00:10:06.027 Reservations: Not Supported 00:10:06.027 ===================================================== 00:10:06.027 NVMe Controller at PCI bus 0, device 17, function 0 00:10:06.027 ===================================================== 00:10:06.027 Reservations: Not Supported 00:10:06.027 ===================================================== 00:10:06.027 NVMe Controller at PCI bus 0, device 19, function 0 00:10:06.028 ===================================================== 00:10:06.028 Reservations: Not Supported 00:10:06.028 ===================================================== 00:10:06.028 NVMe Controller at PCI bus 0, device 18, function 0 00:10:06.028 ===================================================== 00:10:06.028 Reservations: Not Supported 00:10:06.028 Reservation test passed 00:10:06.028 00:10:06.028 real 0m0.295s 00:10:06.028 user 0m0.110s 00:10:06.028 sys 0m0.142s 00:10:06.028 ************************************ 00:10:06.028 END TEST nvme_reserve 00:10:06.028 ************************************ 00:10:06.028 12:03:53 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.028 12:03:53 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:06.028 12:03:53 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:06.028 12:03:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.028 12:03:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.028 12:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.028 ************************************ 00:10:06.028 START TEST nvme_err_injection 00:10:06.028 ************************************ 00:10:06.028 12:03:53 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:06.286 NVMe Error Injection test 00:10:06.286 Attached to 0000:00:10.0 00:10:06.286 Attached to 0000:00:11.0 00:10:06.286 Attached to 0000:00:13.0 00:10:06.286 Attached to 0000:00:12.0 00:10:06.286 0000:00:12.0: get features failed as expected 00:10:06.286 0000:00:10.0: get features failed as expected 00:10:06.286 0000:00:11.0: get features failed as expected 00:10:06.286 0000:00:13.0: get features failed as expected 00:10:06.286 
0000:00:10.0: get features successfully as expected 00:10:06.286 0000:00:11.0: get features successfully as expected 00:10:06.286 0000:00:13.0: get features successfully as expected 00:10:06.286 0000:00:12.0: get features successfully as expected 00:10:06.286 0000:00:10.0: read failed as expected 00:10:06.286 0000:00:11.0: read failed as expected 00:10:06.286 0000:00:13.0: read failed as expected 00:10:06.286 0000:00:12.0: read failed as expected 00:10:06.286 0000:00:10.0: read successfully as expected 00:10:06.286 0000:00:11.0: read successfully as expected 00:10:06.286 0000:00:13.0: read successfully as expected 00:10:06.286 0000:00:12.0: read successfully as expected 00:10:06.286 Cleaning up... 00:10:06.286 00:10:06.286 real 0m0.353s 00:10:06.286 user 0m0.122s 00:10:06.286 sys 0m0.180s 00:10:06.286 ************************************ 00:10:06.286 END TEST nvme_err_injection 00:10:06.286 ************************************ 00:10:06.286 12:03:54 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.286 12:03:54 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:06.544 12:03:54 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:06.544 12:03:54 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:10:06.544 12:03:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.544 12:03:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.544 ************************************ 00:10:06.544 START TEST nvme_overhead 00:10:06.544 ************************************ 00:10:06.544 12:03:54 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:07.920 Initializing NVMe Controllers 00:10:07.920 Attached to 0000:00:10.0 00:10:07.920 Attached to 0000:00:11.0 00:10:07.920 Attached to 0000:00:13.0 00:10:07.920 Attached to 0000:00:12.0 00:10:07.920 Initialization complete. Launching workers. 
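[Editorial aside, not part of the captured log] In the nvme_overhead histograms that follow, the summary line reports submit/complete latencies in nanoseconds while the bucket ranges are in microseconds, and each bucket row shows the cumulative percentage of tracked I/O followed by the raw count for that bucket in parentheses. Assuming that format, the total number of tracked submissions can be recovered from any single row; a minimal sketch using the first submit-histogram bucket below (1 I/O at 0.0115%) as the assumed input:

    # Illustrative only; the 0.0115% / 1-I/O pair is copied from the first submit bucket below.
    awk 'BEGIN { printf "total tracked submissions ~= %.0f\n", 1 / (0.0115 / 100) }'
    # prints: total tracked submissions ~= 8696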
00:10:07.920 submit (in ns) avg, min, max = 13843.8, 11201.6, 116411.2 00:10:07.920 complete (in ns) avg, min, max = 8852.1, 7756.6, 137453.0 00:10:07.920 00:10:07.920 Submit histogram 00:10:07.920 ================ 00:10:07.920 Range in us Cumulative Count 00:10:07.920 11.155 - 11.206: 0.0115% ( 1) 00:10:07.920 11.309 - 11.361: 0.0229% ( 1) 00:10:07.920 11.412 - 11.463: 0.0344% ( 1) 00:10:07.920 11.566 - 11.618: 0.0573% ( 2) 00:10:07.920 11.875 - 11.926: 0.0687% ( 1) 00:10:07.920 11.926 - 11.978: 0.0802% ( 1) 00:10:07.920 12.080 - 12.132: 0.0917% ( 1) 00:10:07.920 12.132 - 12.183: 0.1031% ( 1) 00:10:07.920 12.286 - 12.337: 0.1146% ( 1) 00:10:07.920 12.337 - 12.389: 0.2062% ( 8) 00:10:07.920 12.389 - 12.440: 0.4125% ( 18) 00:10:07.920 12.440 - 12.492: 0.7447% ( 29) 00:10:07.920 12.492 - 12.543: 1.3520% ( 53) 00:10:07.920 12.543 - 12.594: 2.4748% ( 98) 00:10:07.920 12.594 - 12.646: 4.3309% ( 162) 00:10:07.920 12.646 - 12.697: 6.6453% ( 202) 00:10:07.920 12.697 - 12.749: 10.2544% ( 315) 00:10:07.920 12.749 - 12.800: 13.6916% ( 300) 00:10:07.920 12.800 - 12.851: 17.8735% ( 365) 00:10:07.920 12.851 - 12.903: 21.7003% ( 334) 00:10:07.920 12.903 - 12.954: 25.4583% ( 328) 00:10:07.920 12.954 - 13.006: 29.1934% ( 326) 00:10:07.920 13.006 - 13.057: 32.7910% ( 314) 00:10:07.920 13.057 - 13.108: 36.4574% ( 320) 00:10:07.920 13.108 - 13.160: 39.8373% ( 295) 00:10:07.920 13.160 - 13.263: 46.4482% ( 577) 00:10:07.920 13.263 - 13.365: 53.1737% ( 587) 00:10:07.920 13.365 - 13.468: 60.5866% ( 647) 00:10:07.920 13.468 - 13.571: 67.7933% ( 629) 00:10:07.920 13.571 - 13.674: 73.4303% ( 492) 00:10:07.920 13.674 - 13.777: 78.4143% ( 435) 00:10:07.920 13.777 - 13.880: 81.8630% ( 301) 00:10:07.920 13.880 - 13.982: 84.6471% ( 243) 00:10:07.920 13.982 - 14.085: 86.4688% ( 159) 00:10:07.920 14.085 - 14.188: 88.2333% ( 154) 00:10:07.920 14.188 - 14.291: 89.9633% ( 151) 00:10:07.920 14.291 - 14.394: 90.9716% ( 88) 00:10:07.920 14.394 - 14.496: 91.6705% ( 61) 00:10:07.920 14.496 - 14.599: 92.2204% ( 48) 00:10:07.920 14.599 - 14.702: 92.6214% ( 35) 00:10:07.920 14.702 - 14.805: 92.8048% ( 16) 00:10:07.920 14.805 - 14.908: 92.9079% ( 9) 00:10:07.920 14.908 - 15.010: 93.0110% ( 9) 00:10:07.920 15.010 - 15.113: 93.1256% ( 10) 00:10:07.920 15.113 - 15.216: 93.2058% ( 7) 00:10:07.920 15.216 - 15.319: 93.2745% ( 6) 00:10:07.920 15.319 - 15.422: 93.3203% ( 4) 00:10:07.920 15.422 - 15.524: 93.3318% ( 1) 00:10:07.920 15.524 - 15.627: 93.3547% ( 2) 00:10:07.920 15.627 - 15.730: 93.3891% ( 3) 00:10:07.920 15.730 - 15.833: 93.4235% ( 3) 00:10:07.920 15.936 - 16.039: 93.4693% ( 4) 00:10:07.920 16.039 - 16.141: 93.5495% ( 7) 00:10:07.920 16.141 - 16.244: 93.6182% ( 6) 00:10:07.920 16.244 - 16.347: 93.6641% ( 4) 00:10:07.920 16.347 - 16.450: 93.7328% ( 6) 00:10:07.920 16.450 - 16.553: 93.8359% ( 9) 00:10:07.920 16.553 - 16.655: 93.9620% ( 11) 00:10:07.920 16.655 - 16.758: 94.0422% ( 7) 00:10:07.920 16.758 - 16.861: 94.1682% ( 11) 00:10:07.920 16.861 - 16.964: 94.2713% ( 9) 00:10:07.920 16.964 - 17.067: 94.3859% ( 10) 00:10:07.920 17.067 - 17.169: 94.4890% ( 9) 00:10:07.920 17.169 - 17.272: 94.5692% ( 7) 00:10:07.920 17.272 - 17.375: 94.6952% ( 11) 00:10:07.920 17.375 - 17.478: 94.7984% ( 9) 00:10:07.920 17.478 - 17.581: 94.8900% ( 8) 00:10:07.920 17.581 - 17.684: 94.9817% ( 8) 00:10:07.920 17.684 - 17.786: 95.0390% ( 5) 00:10:07.920 17.786 - 17.889: 95.1306% ( 8) 00:10:07.920 17.889 - 17.992: 95.1994% ( 6) 00:10:07.920 17.992 - 18.095: 95.2681% ( 6) 00:10:07.920 18.095 - 18.198: 95.3941% ( 11) 00:10:07.920 18.198 - 18.300: 95.5316% ( 
12) 00:10:07.920 18.300 - 18.403: 95.7264% ( 17) 00:10:07.920 18.403 - 18.506: 95.7951% ( 6) 00:10:07.920 18.506 - 18.609: 95.9785% ( 16) 00:10:07.920 18.609 - 18.712: 96.1159% ( 12) 00:10:07.920 18.712 - 18.814: 96.2420% ( 11) 00:10:07.920 18.814 - 18.917: 96.3680% ( 11) 00:10:07.920 18.917 - 19.020: 96.5055% ( 12) 00:10:07.920 19.020 - 19.123: 96.6544% ( 13) 00:10:07.920 19.123 - 19.226: 96.7346% ( 7) 00:10:07.920 19.226 - 19.329: 96.8492% ( 10) 00:10:07.920 19.329 - 19.431: 96.9982% ( 13) 00:10:07.921 19.431 - 19.534: 97.1471% ( 13) 00:10:07.921 19.534 - 19.637: 97.2388% ( 8) 00:10:07.921 19.637 - 19.740: 97.2961% ( 5) 00:10:07.921 19.740 - 19.843: 97.4565% ( 14) 00:10:07.921 19.843 - 19.945: 97.5710% ( 10) 00:10:07.921 19.945 - 20.048: 97.6169% ( 4) 00:10:07.921 20.048 - 20.151: 97.6856% ( 6) 00:10:07.921 20.151 - 20.254: 97.7200% ( 3) 00:10:07.921 20.254 - 20.357: 97.7658% ( 4) 00:10:07.921 20.357 - 20.459: 97.8689% ( 9) 00:10:07.921 20.459 - 20.562: 97.9033% ( 3) 00:10:07.921 20.562 - 20.665: 97.9377% ( 3) 00:10:07.921 20.665 - 20.768: 97.9491% ( 1) 00:10:07.921 20.768 - 20.871: 97.9720% ( 2) 00:10:07.921 20.871 - 20.973: 98.0064% ( 3) 00:10:07.921 20.973 - 21.076: 98.0637% ( 5) 00:10:07.921 21.076 - 21.179: 98.1095% ( 4) 00:10:07.921 21.385 - 21.488: 98.1210% ( 1) 00:10:07.921 21.488 - 21.590: 98.1668% ( 4) 00:10:07.921 21.693 - 21.796: 98.2012% ( 3) 00:10:07.921 22.207 - 22.310: 98.2356% ( 3) 00:10:07.921 22.310 - 22.413: 98.2470% ( 1) 00:10:07.921 22.413 - 22.516: 98.2585% ( 1) 00:10:07.921 22.516 - 22.618: 98.2814% ( 2) 00:10:07.921 22.824 - 22.927: 98.2929% ( 1) 00:10:07.921 22.927 - 23.030: 98.3158% ( 2) 00:10:07.921 23.235 - 23.338: 98.3272% ( 1) 00:10:07.921 23.338 - 23.441: 98.3501% ( 2) 00:10:07.921 23.544 - 23.647: 98.3616% ( 1) 00:10:07.921 23.749 - 23.852: 98.3731% ( 1) 00:10:07.921 23.955 - 24.058: 98.4074% ( 3) 00:10:07.921 24.058 - 24.161: 98.4303% ( 2) 00:10:07.921 24.161 - 24.263: 98.4762% ( 4) 00:10:07.921 24.263 - 24.366: 98.4991% ( 2) 00:10:07.921 24.366 - 24.469: 98.5335% ( 3) 00:10:07.921 24.469 - 24.572: 98.5449% ( 1) 00:10:07.921 24.572 - 24.675: 98.5793% ( 3) 00:10:07.921 24.880 - 24.983: 98.5907% ( 1) 00:10:07.921 24.983 - 25.086: 98.6022% ( 1) 00:10:07.921 25.086 - 25.189: 98.6251% ( 2) 00:10:07.921 25.189 - 25.292: 98.6595% ( 3) 00:10:07.921 25.292 - 25.394: 98.6824% ( 2) 00:10:07.921 25.394 - 25.497: 98.7511% ( 6) 00:10:07.921 25.497 - 25.600: 98.8084% ( 5) 00:10:07.921 25.600 - 25.703: 98.9115% ( 9) 00:10:07.921 25.703 - 25.806: 98.9574% ( 4) 00:10:07.921 25.806 - 25.908: 99.0261% ( 6) 00:10:07.921 25.908 - 26.011: 99.1292% ( 9) 00:10:07.921 26.114 - 26.217: 99.1751% ( 4) 00:10:07.921 26.217 - 26.320: 99.1980% ( 2) 00:10:07.921 26.320 - 26.525: 99.2094% ( 1) 00:10:07.921 26.525 - 26.731: 99.2324% ( 2) 00:10:07.921 26.731 - 26.937: 99.2438% ( 1) 00:10:07.921 27.142 - 27.348: 99.2782% ( 3) 00:10:07.921 27.553 - 27.759: 99.2896% ( 1) 00:10:07.921 28.582 - 28.787: 99.3011% ( 1) 00:10:07.921 28.787 - 28.993: 99.3240% ( 2) 00:10:07.921 28.993 - 29.198: 99.3355% ( 1) 00:10:07.921 29.198 - 29.404: 99.3813% ( 4) 00:10:07.921 29.404 - 29.610: 99.4730% ( 8) 00:10:07.921 29.610 - 29.815: 99.5646% ( 8) 00:10:07.921 29.815 - 30.021: 99.6792% ( 10) 00:10:07.921 30.021 - 30.227: 99.7365% ( 5) 00:10:07.921 30.227 - 30.432: 99.7709% ( 3) 00:10:07.921 30.843 - 31.049: 99.7823% ( 1) 00:10:07.921 31.049 - 31.255: 99.8167% ( 3) 00:10:07.921 31.460 - 31.666: 99.8281% ( 1) 00:10:07.921 31.871 - 32.077: 99.8396% ( 1) 00:10:07.921 33.105 - 33.311: 99.8511% ( 1) 00:10:07.921 
33.928 - 34.133: 99.8625% ( 1) 00:10:07.921 35.778 - 35.984: 99.8854% ( 2) 00:10:07.921 36.190 - 36.395: 99.8969% ( 1) 00:10:07.921 36.806 - 37.012: 99.9083% ( 1) 00:10:07.921 41.330 - 41.536: 99.9198% ( 1) 00:10:07.921 41.536 - 41.741: 99.9313% ( 1) 00:10:07.921 49.966 - 50.172: 99.9427% ( 1) 00:10:07.921 54.284 - 54.696: 99.9542% ( 1) 00:10:07.921 75.258 - 75.669: 99.9656% ( 1) 00:10:07.921 78.137 - 78.548: 99.9771% ( 1) 00:10:07.921 95.409 - 95.820: 99.9885% ( 1) 00:10:07.921 115.971 - 116.794: 100.0000% ( 1) 00:10:07.921 00:10:07.921 Complete histogram 00:10:07.921 ================== 00:10:07.921 Range in us Cumulative Count 00:10:07.921 7.711 - 7.762: 0.0115% ( 1) 00:10:07.921 7.762 - 7.814: 1.0426% ( 90) 00:10:07.921 7.814 - 7.865: 7.8483% ( 594) 00:10:07.921 7.865 - 7.916: 18.1943% ( 903) 00:10:07.921 7.916 - 7.968: 26.0426% ( 685) 00:10:07.921 7.968 - 8.019: 30.8433% ( 419) 00:10:07.921 8.019 - 8.071: 33.8107% ( 259) 00:10:07.921 8.071 - 8.122: 36.4230% ( 228) 00:10:07.921 8.122 - 8.173: 38.1416% ( 150) 00:10:07.921 8.173 - 8.225: 39.4019% ( 110) 00:10:07.921 8.225 - 8.276: 42.1402% ( 239) 00:10:07.921 8.276 - 8.328: 47.4794% ( 466) 00:10:07.921 8.328 - 8.379: 51.9936% ( 394) 00:10:07.921 8.379 - 8.431: 54.4111% ( 211) 00:10:07.921 8.431 - 8.482: 56.1755% ( 154) 00:10:07.921 8.482 - 8.533: 60.7814% ( 402) 00:10:07.921 8.533 - 8.585: 67.7016% ( 604) 00:10:07.921 8.585 - 8.636: 72.5940% ( 427) 00:10:07.921 8.636 - 8.688: 75.3093% ( 237) 00:10:07.921 8.688 - 8.739: 77.6237% ( 202) 00:10:07.921 8.739 - 8.790: 79.5600% ( 169) 00:10:07.921 8.790 - 8.842: 81.7369% ( 190) 00:10:07.921 8.842 - 8.893: 83.4555% ( 150) 00:10:07.921 8.893 - 8.945: 85.0481% ( 139) 00:10:07.921 8.945 - 8.996: 86.6865% ( 143) 00:10:07.921 8.996 - 9.047: 88.3478% ( 145) 00:10:07.921 9.047 - 9.099: 89.5050% ( 101) 00:10:07.921 9.099 - 9.150: 90.3987% ( 78) 00:10:07.921 9.150 - 9.202: 91.3268% ( 81) 00:10:07.921 9.202 - 9.253: 91.8767% ( 48) 00:10:07.921 9.253 - 9.304: 92.4152% ( 47) 00:10:07.921 9.304 - 9.356: 92.9079% ( 43) 00:10:07.921 9.356 - 9.407: 93.4005% ( 43) 00:10:07.921 9.407 - 9.459: 93.6526% ( 22) 00:10:07.921 9.459 - 9.510: 93.8932% ( 21) 00:10:07.921 9.510 - 9.561: 94.1797% ( 25) 00:10:07.921 9.561 - 9.613: 94.2942% ( 10) 00:10:07.921 9.613 - 9.664: 94.4546% ( 14) 00:10:07.921 9.664 - 9.716: 94.5692% ( 10) 00:10:07.921 9.716 - 9.767: 94.6838% ( 10) 00:10:07.921 9.767 - 9.818: 94.7411% ( 5) 00:10:07.921 9.818 - 9.870: 94.7984% ( 5) 00:10:07.921 9.870 - 9.921: 94.9015% ( 9) 00:10:07.921 9.921 - 9.973: 94.9129% ( 1) 00:10:07.921 9.973 - 10.024: 94.9473% ( 3) 00:10:07.921 10.024 - 10.076: 94.9702% ( 2) 00:10:07.921 10.076 - 10.127: 95.0046% ( 3) 00:10:07.921 10.127 - 10.178: 95.0504% ( 4) 00:10:07.921 10.178 - 10.230: 95.0962% ( 4) 00:10:07.921 10.230 - 10.281: 95.1306% ( 3) 00:10:07.921 10.281 - 10.333: 95.1650% ( 3) 00:10:07.921 10.333 - 10.384: 95.1764% ( 1) 00:10:07.921 10.384 - 10.435: 95.1994% ( 2) 00:10:07.921 10.435 - 10.487: 95.2337% ( 3) 00:10:07.921 10.487 - 10.538: 95.2566% ( 2) 00:10:07.921 10.538 - 10.590: 95.2910% ( 3) 00:10:07.921 10.590 - 10.641: 95.3483% ( 5) 00:10:07.921 10.641 - 10.692: 95.3827% ( 3) 00:10:07.921 10.692 - 10.744: 95.4056% ( 2) 00:10:07.921 10.744 - 10.795: 95.4170% ( 1) 00:10:07.921 10.795 - 10.847: 95.4400% ( 2) 00:10:07.921 10.898 - 10.949: 95.4743% ( 3) 00:10:07.921 10.949 - 11.001: 95.4858% ( 1) 00:10:07.921 11.052 - 11.104: 95.5087% ( 2) 00:10:07.921 11.155 - 11.206: 95.5202% ( 1) 00:10:07.921 11.206 - 11.258: 95.5316% ( 1) 00:10:07.921 11.258 - 11.309: 95.5660% ( 
3) 00:10:07.921 11.309 - 11.361: 95.5775% ( 1) 00:10:07.921 11.361 - 11.412: 95.5889% ( 1) 00:10:07.921 11.412 - 11.463: 95.6004% ( 1) 00:10:07.922 11.463 - 11.515: 95.6118% ( 1) 00:10:07.922 11.566 - 11.618: 95.6233% ( 1) 00:10:07.922 11.772 - 11.823: 95.6462% ( 2) 00:10:07.922 11.823 - 11.875: 95.6577% ( 1) 00:10:07.922 11.926 - 11.978: 95.6806% ( 2) 00:10:07.922 12.029 - 12.080: 95.6920% ( 1) 00:10:07.922 12.132 - 12.183: 95.7035% ( 1) 00:10:07.922 12.183 - 12.235: 95.7493% ( 4) 00:10:07.922 12.235 - 12.286: 95.7837% ( 3) 00:10:07.922 12.286 - 12.337: 95.7951% ( 1) 00:10:07.922 12.337 - 12.389: 95.8181% ( 2) 00:10:07.922 12.389 - 12.440: 95.8753% ( 5) 00:10:07.922 12.543 - 12.594: 95.8868% ( 1) 00:10:07.922 12.594 - 12.646: 95.8983% ( 1) 00:10:07.922 12.749 - 12.800: 95.9441% ( 4) 00:10:07.922 12.800 - 12.851: 95.9555% ( 1) 00:10:07.922 12.954 - 13.006: 95.9670% ( 1) 00:10:07.922 13.108 - 13.160: 95.9899% ( 2) 00:10:07.922 13.160 - 13.263: 96.0243% ( 3) 00:10:07.922 13.263 - 13.365: 96.1159% ( 8) 00:10:07.922 13.365 - 13.468: 96.1503% ( 3) 00:10:07.922 13.468 - 13.571: 96.2076% ( 5) 00:10:07.922 13.571 - 13.674: 96.2534% ( 4) 00:10:07.922 13.674 - 13.777: 96.2993% ( 4) 00:10:07.922 13.777 - 13.880: 96.3795% ( 7) 00:10:07.922 13.880 - 13.982: 96.4597% ( 7) 00:10:07.922 13.982 - 14.085: 96.5284% ( 6) 00:10:07.922 14.085 - 14.188: 96.6201% ( 8) 00:10:07.922 14.188 - 14.291: 96.7117% ( 8) 00:10:07.922 14.291 - 14.394: 96.7576% ( 4) 00:10:07.922 14.394 - 14.496: 96.8378% ( 7) 00:10:07.922 14.496 - 14.599: 96.8951% ( 5) 00:10:07.922 14.599 - 14.702: 96.9294% ( 3) 00:10:07.922 14.702 - 14.805: 96.9982% ( 6) 00:10:07.922 14.805 - 14.908: 97.0898% ( 8) 00:10:07.922 14.908 - 15.010: 97.1700% ( 7) 00:10:07.922 15.010 - 15.113: 97.2388% ( 6) 00:10:07.922 15.113 - 15.216: 97.2961% ( 5) 00:10:07.922 15.216 - 15.319: 97.3533% ( 5) 00:10:07.922 15.319 - 15.422: 97.3877% ( 3) 00:10:07.922 15.422 - 15.524: 97.4106% ( 2) 00:10:07.922 15.524 - 15.627: 97.4335% ( 2) 00:10:07.922 15.627 - 15.730: 97.4450% ( 1) 00:10:07.922 15.730 - 15.833: 97.4565% ( 1) 00:10:07.922 15.936 - 16.039: 97.4908% ( 3) 00:10:07.922 16.039 - 16.141: 97.5023% ( 1) 00:10:07.922 16.141 - 16.244: 97.5252% ( 2) 00:10:07.922 16.244 - 16.347: 97.5596% ( 3) 00:10:07.922 16.450 - 16.553: 97.5710% ( 1) 00:10:07.922 16.655 - 16.758: 97.5825% ( 1) 00:10:07.922 16.758 - 16.861: 97.5940% ( 1) 00:10:07.922 16.861 - 16.964: 97.6512% ( 5) 00:10:07.922 16.964 - 17.067: 97.6627% ( 1) 00:10:07.922 17.067 - 17.169: 97.6742% ( 1) 00:10:07.922 17.169 - 17.272: 97.6856% ( 1) 00:10:07.922 17.478 - 17.581: 97.6971% ( 1) 00:10:07.922 17.581 - 17.684: 97.7085% ( 1) 00:10:07.922 17.684 - 17.786: 97.7429% ( 3) 00:10:07.922 17.786 - 17.889: 97.7544% ( 1) 00:10:07.922 17.992 - 18.095: 97.7658% ( 1) 00:10:07.922 18.403 - 18.506: 97.7773% ( 1) 00:10:07.922 18.506 - 18.609: 97.8002% ( 2) 00:10:07.922 18.609 - 18.712: 97.8116% ( 1) 00:10:07.922 18.712 - 18.814: 97.8231% ( 1) 00:10:07.922 18.814 - 18.917: 97.8460% ( 2) 00:10:07.922 18.917 - 19.020: 97.8575% ( 1) 00:10:07.922 19.020 - 19.123: 97.8804% ( 2) 00:10:07.922 19.123 - 19.226: 97.9491% ( 6) 00:10:07.922 19.226 - 19.329: 98.0637% ( 10) 00:10:07.922 19.329 - 19.431: 98.1324% ( 6) 00:10:07.922 19.431 - 19.534: 98.1897% ( 5) 00:10:07.922 19.534 - 19.637: 98.2012% ( 1) 00:10:07.922 19.637 - 19.740: 98.2585% ( 5) 00:10:07.922 19.740 - 19.843: 98.2699% ( 1) 00:10:07.922 19.843 - 19.945: 98.2814% ( 1) 00:10:07.922 19.945 - 20.048: 98.2929% ( 1) 00:10:07.922 20.048 - 20.151: 98.3501% ( 5) 00:10:07.922 20.151 - 20.254: 
98.4303% ( 7) 00:10:07.922 20.254 - 20.357: 98.5335% ( 9) 00:10:07.922 20.357 - 20.459: 98.6137% ( 7) 00:10:07.922 20.459 - 20.562: 98.7168% ( 9) 00:10:07.922 20.562 - 20.665: 98.8313% ( 10) 00:10:07.922 20.665 - 20.768: 98.9115% ( 7) 00:10:07.922 20.768 - 20.871: 99.0147% ( 9) 00:10:07.922 20.871 - 20.973: 99.0261% ( 1) 00:10:07.922 20.973 - 21.076: 99.0490% ( 2) 00:10:07.922 21.076 - 21.179: 99.0605% ( 1) 00:10:07.922 21.179 - 21.282: 99.0834% ( 2) 00:10:07.922 21.282 - 21.385: 99.1292% ( 4) 00:10:07.922 21.385 - 21.488: 99.1636% ( 3) 00:10:07.922 21.488 - 21.590: 99.1751% ( 1) 00:10:07.922 21.693 - 21.796: 99.1865% ( 1) 00:10:07.922 21.796 - 21.899: 99.1980% ( 1) 00:10:07.922 21.899 - 22.002: 99.2094% ( 1) 00:10:07.922 22.002 - 22.104: 99.2209% ( 1) 00:10:07.922 22.207 - 22.310: 99.2324% ( 1) 00:10:07.922 22.310 - 22.413: 99.2438% ( 1) 00:10:07.922 22.618 - 22.721: 99.2553% ( 1) 00:10:07.922 23.133 - 23.235: 99.2667% ( 1) 00:10:07.922 23.338 - 23.441: 99.3011% ( 3) 00:10:07.922 23.852 - 23.955: 99.3126% ( 1) 00:10:07.922 23.955 - 24.058: 99.3240% ( 1) 00:10:07.922 24.058 - 24.161: 99.3469% ( 2) 00:10:07.922 24.161 - 24.263: 99.3584% ( 1) 00:10:07.922 24.263 - 24.366: 99.3698% ( 1) 00:10:07.922 24.366 - 24.469: 99.3928% ( 2) 00:10:07.922 24.469 - 24.572: 99.4042% ( 1) 00:10:07.922 24.572 - 24.675: 99.4615% ( 5) 00:10:07.922 24.675 - 24.778: 99.5532% ( 8) 00:10:07.922 24.778 - 24.880: 99.6104% ( 5) 00:10:07.922 24.880 - 24.983: 99.6792% ( 6) 00:10:07.922 24.983 - 25.086: 99.7250% ( 4) 00:10:07.922 25.086 - 25.189: 99.7365% ( 1) 00:10:07.922 25.189 - 25.292: 99.7479% ( 1) 00:10:07.922 25.292 - 25.394: 99.7594% ( 1) 00:10:07.922 25.394 - 25.497: 99.7709% ( 1) 00:10:07.922 25.497 - 25.600: 99.7938% ( 2) 00:10:07.922 25.600 - 25.703: 99.8052% ( 1) 00:10:07.922 25.908 - 26.011: 99.8167% ( 1) 00:10:07.922 26.114 - 26.217: 99.8281% ( 1) 00:10:07.922 26.320 - 26.525: 99.8396% ( 1) 00:10:07.922 27.759 - 27.965: 99.8511% ( 1) 00:10:07.922 32.283 - 32.488: 99.8625% ( 1) 00:10:07.922 32.694 - 32.900: 99.8740% ( 1) 00:10:07.922 33.311 - 33.516: 99.8854% ( 1) 00:10:07.922 34.956 - 35.161: 99.8969% ( 1) 00:10:07.922 36.806 - 37.012: 99.9083% ( 1) 00:10:07.922 37.629 - 37.835: 99.9198% ( 1) 00:10:07.922 39.685 - 39.891: 99.9313% ( 1) 00:10:07.922 41.330 - 41.536: 99.9427% ( 1) 00:10:07.922 42.769 - 42.975: 99.9542% ( 1) 00:10:07.922 57.574 - 57.986: 99.9656% ( 1) 00:10:07.922 99.521 - 99.933: 99.9771% ( 1) 00:10:07.922 106.101 - 106.924: 99.9885% ( 1) 00:10:07.922 137.356 - 138.178: 100.0000% ( 1) 00:10:07.922 00:10:07.922 ************************************ 00:10:07.922 END TEST nvme_overhead 00:10:07.922 ************************************ 00:10:07.922 00:10:07.922 real 0m1.294s 00:10:07.922 user 0m1.091s 00:10:07.922 sys 0m0.158s 00:10:07.922 12:03:55 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.922 12:03:55 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 12:03:55 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:07.922 12:03:55 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:07.922 12:03:55 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.922 12:03:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.922 ************************************ 00:10:07.922 START TEST nvme_arbitration 00:10:07.922 ************************************ 00:10:07.922 12:03:55 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:11.215 Initializing NVMe Controllers 00:10:11.215 Attached to 0000:00:10.0 00:10:11.215 Attached to 0000:00:11.0 00:10:11.215 Attached to 0000:00:13.0 00:10:11.215 Attached to 0000:00:12.0 00:10:11.215 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:11.215 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:11.215 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:11.215 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:11.215 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:11.215 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:11.215 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:11.215 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:11.215 Initialization complete. Launching workers. 00:10:11.215 Starting thread on core 1 with urgent priority queue 00:10:11.215 Starting thread on core 2 with urgent priority queue 00:10:11.215 Starting thread on core 3 with urgent priority queue 00:10:11.215 Starting thread on core 0 with urgent priority queue 00:10:11.215 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:10:11.215 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:10:11.215 QEMU NVMe Ctrl (12341 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:10:11.215 QEMU NVMe Ctrl (12342 ) core 1: 554.67 IO/s 180.29 secs/100000 ios 00:10:11.215 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:10:11.215 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios 00:10:11.215 ======================================================== 00:10:11.215 00:10:11.215 00:10:11.215 real 0m3.442s 00:10:11.215 user 0m9.485s 00:10:11.215 sys 0m0.152s 00:10:11.215 12:03:59 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.215 12:03:59 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:11.215 ************************************ 00:10:11.215 END TEST nvme_arbitration 00:10:11.215 ************************************ 00:10:11.474 12:03:59 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:11.474 12:03:59 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:11.474 12:03:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.474 12:03:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.474 ************************************ 00:10:11.474 START TEST nvme_single_aen 00:10:11.474 ************************************ 00:10:11.474 12:03:59 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:11.732 Asynchronous Event Request test 00:10:11.732 Attached to 0000:00:10.0 00:10:11.732 Attached to 0000:00:11.0 00:10:11.732 Attached to 0000:00:13.0 00:10:11.732 Attached to 0000:00:12.0 00:10:11.732 Reset controller to setup AER completions for this process 00:10:11.732 Registering asynchronous event callbacks... 
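[Editorial aside, not part of the captured log] In the nvme_arbitration summary above, the two figures on each per-core line are consistent with each other: the secs/100000 ios column is simply 100000 divided by the reported IO/s. A quick check of two of the rows shown, reproduced with awk:

    # Illustrative only; 576.00 and 554.67 IO/s are copied from the per-core rows above.
    awk 'BEGIN { printf "%.2f\n", 100000 / 576.00 }'   # -> 173.61 secs/100000 ios
    awk 'BEGIN { printf "%.2f\n", 100000 / 554.67 }'   # -> 180.29 secs/100000 ios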
00:10:11.732 Getting orig temperature thresholds of all controllers 00:10:11.732 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.732 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.732 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.732 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.732 Setting all controllers temperature threshold low to trigger AER 00:10:11.732 Waiting for all controllers temperature threshold to be set lower 00:10:11.732 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.732 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:11.732 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.732 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:11.732 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.732 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:11.732 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.732 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:11.732 Waiting for all controllers to trigger AER and reset threshold 00:10:11.732 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.732 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.732 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.732 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.732 Cleaning up... 00:10:11.732 00:10:11.732 real 0m0.293s 00:10:11.732 user 0m0.110s 00:10:11.732 sys 0m0.142s 00:10:11.732 12:03:59 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.732 ************************************ 00:10:11.732 END TEST nvme_single_aen 00:10:11.732 ************************************ 00:10:11.732 12:03:59 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 12:03:59 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:11.732 12:03:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.732 12:03:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.732 12:03:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.732 ************************************ 00:10:11.732 START TEST nvme_doorbell_aers 00:10:11.732 ************************************ 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:11.732 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
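[Editorial aside, not part of the captured log] The nvme_doorbell_aers trace above builds its device list by piping scripts/gen_nvme.sh through jq to extract each controller's PCI address; the loop that consumes that list follows in the log. A minimal sketch of the same pattern, under the assumptions visible in the trace (repo checked out at /home/vagrant/spdk_repo/spdk, four PCIe controllers present):

    # Illustrative sketch only -- mirrors the traced commands, not copied from a script.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done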
00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:11.733 12:03:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:12.299 [2024-07-26 12:03:59.981103] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:22.262 Executing: test_write_invalid_db 00:10:22.262 Waiting for AER completion... 00:10:22.262 Failure: test_write_invalid_db 00:10:22.262 00:10:22.262 Executing: test_invalid_db_write_overflow_sq 00:10:22.262 Waiting for AER completion... 00:10:22.262 Failure: test_invalid_db_write_overflow_sq 00:10:22.262 00:10:22.262 Executing: test_invalid_db_write_overflow_cq 00:10:22.262 Waiting for AER completion... 00:10:22.262 Failure: test_invalid_db_write_overflow_cq 00:10:22.262 00:10:22.262 12:04:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:22.262 12:04:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:22.262 [2024-07-26 12:04:10.025382] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:32.232 Executing: test_write_invalid_db 00:10:32.232 Waiting for AER completion... 00:10:32.232 Failure: test_write_invalid_db 00:10:32.232 00:10:32.232 Executing: test_invalid_db_write_overflow_sq 00:10:32.232 Waiting for AER completion... 00:10:32.232 Failure: test_invalid_db_write_overflow_sq 00:10:32.232 00:10:32.232 Executing: test_invalid_db_write_overflow_cq 00:10:32.232 Waiting for AER completion... 00:10:32.232 Failure: test_invalid_db_write_overflow_cq 00:10:32.232 00:10:32.232 12:04:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:32.232 12:04:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:32.232 [2024-07-26 12:04:20.088086] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:42.202 Executing: test_write_invalid_db 00:10:42.202 Waiting for AER completion... 00:10:42.202 Failure: test_write_invalid_db 00:10:42.202 00:10:42.202 Executing: test_invalid_db_write_overflow_sq 00:10:42.202 Waiting for AER completion... 00:10:42.202 Failure: test_invalid_db_write_overflow_sq 00:10:42.202 00:10:42.202 Executing: test_invalid_db_write_overflow_cq 00:10:42.202 Waiting for AER completion... 
00:10:42.202 Failure: test_invalid_db_write_overflow_cq 00:10:42.202 00:10:42.202 12:04:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:42.202 12:04:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:42.202 [2024-07-26 12:04:30.145365] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.173 Executing: test_write_invalid_db 00:10:52.173 Waiting for AER completion... 00:10:52.173 Failure: test_write_invalid_db 00:10:52.173 00:10:52.173 Executing: test_invalid_db_write_overflow_sq 00:10:52.173 Waiting for AER completion... 00:10:52.173 Failure: test_invalid_db_write_overflow_sq 00:10:52.173 00:10:52.173 Executing: test_invalid_db_write_overflow_cq 00:10:52.173 Waiting for AER completion... 00:10:52.173 Failure: test_invalid_db_write_overflow_cq 00:10:52.173 00:10:52.173 00:10:52.173 real 0m40.324s 00:10:52.173 user 0m30.124s 00:10:52.173 sys 0m9.833s 00:10:52.173 12:04:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.173 12:04:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:52.173 ************************************ 00:10:52.173 END TEST nvme_doorbell_aers 00:10:52.173 ************************************ 00:10:52.173 12:04:39 nvme -- nvme/nvme.sh@97 -- # uname 00:10:52.173 12:04:39 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:52.173 12:04:39 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:52.173 12:04:39 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:10:52.173 12:04:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.173 12:04:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.173 ************************************ 00:10:52.173 START TEST nvme_multi_aen 00:10:52.173 ************************************ 00:10:52.173 12:04:39 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:52.433 [2024-07-26 12:04:40.237131] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.237270] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.237292] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.239088] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.239143] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.239163] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.240804] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. 
Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.240970] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.241088] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.242593] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.242755] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 [2024-07-26 12:04:40.242857] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68679) is not found. Dropping the request. 00:10:52.433 Child process pid: 69201 00:10:52.692 [Child] Asynchronous Event Request test 00:10:52.692 [Child] Attached to 0000:00:10.0 00:10:52.692 [Child] Attached to 0000:00:11.0 00:10:52.692 [Child] Attached to 0000:00:13.0 00:10:52.692 [Child] Attached to 0000:00:12.0 00:10:52.692 [Child] Registering asynchronous event callbacks... 00:10:52.692 [Child] Getting orig temperature thresholds of all controllers 00:10:52.692 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:52.692 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.692 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.692 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.692 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.692 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.692 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.692 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.692 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.692 [Child] Cleaning up... 00:10:52.692 Asynchronous Event Request test 00:10:52.692 Attached to 0000:00:10.0 00:10:52.692 Attached to 0000:00:11.0 00:10:52.692 Attached to 0000:00:13.0 00:10:52.692 Attached to 0000:00:12.0 00:10:52.692 Reset controller to setup AER completions for this process 00:10:52.692 Registering asynchronous event callbacks... 
00:10:52.692 Getting orig temperature thresholds of all controllers 00:10:52.692 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.692 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:52.693 Setting all controllers temperature threshold low to trigger AER 00:10:52.693 Waiting for all controllers temperature threshold to be set lower 00:10:52.693 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.693 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:52.693 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.693 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:52.693 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.693 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:52.693 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:52.693 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:52.693 Waiting for all controllers to trigger AER and reset threshold 00:10:52.693 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.693 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.693 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.693 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:52.693 Cleaning up... 00:10:52.693 00:10:52.693 real 0m0.600s 00:10:52.693 user 0m0.207s 00:10:52.693 sys 0m0.283s 00:10:52.693 12:04:40 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.693 12:04:40 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:52.693 ************************************ 00:10:52.693 END TEST nvme_multi_aen 00:10:52.693 ************************************ 00:10:52.693 12:04:40 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:52.693 12:04:40 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.693 12:04:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.693 12:04:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:52.693 ************************************ 00:10:52.693 START TEST nvme_startup 00:10:52.693 ************************************ 00:10:52.693 12:04:40 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:52.951 Initializing NVMe Controllers 00:10:52.951 Attached to 0000:00:10.0 00:10:52.951 Attached to 0000:00:11.0 00:10:52.951 Attached to 0000:00:13.0 00:10:52.951 Attached to 0000:00:12.0 00:10:52.951 Initialization complete. 00:10:52.951 Time used:177657.469 (us). 
00:10:53.210 00:10:53.210 real 0m0.282s 00:10:53.210 user 0m0.104s 00:10:53.210 sys 0m0.128s 00:10:53.210 12:04:40 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.210 12:04:40 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:53.210 ************************************ 00:10:53.210 END TEST nvme_startup 00:10:53.210 ************************************ 00:10:53.210 12:04:40 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:53.210 12:04:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.210 12:04:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.210 12:04:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:53.210 ************************************ 00:10:53.210 START TEST nvme_multi_secondary 00:10:53.210 ************************************ 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69257 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69258 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:53.210 12:04:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:56.497 Initializing NVMe Controllers 00:10:56.497 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:56.497 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:56.497 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:56.497 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:56.497 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:56.497 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:56.497 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:56.497 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:56.497 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:56.497 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:56.497 Initialization complete. Launching workers. 
00:10:56.497 ======================================================== 00:10:56.497 Latency(us) 00:10:56.497 Device Information : IOPS MiB/s Average min max 00:10:56.497 PCIE (0000:00:10.0) NSID 1 from core 1: 5048.23 19.72 3167.04 947.54 7277.11 00:10:56.497 PCIE (0000:00:11.0) NSID 1 from core 1: 5048.23 19.72 3169.02 979.76 7401.07 00:10:56.498 PCIE (0000:00:13.0) NSID 1 from core 1: 5048.23 19.72 3169.64 963.47 7290.29 00:10:56.498 PCIE (0000:00:12.0) NSID 1 from core 1: 5048.23 19.72 3169.67 961.37 6635.62 00:10:56.498 PCIE (0000:00:12.0) NSID 2 from core 1: 5048.23 19.72 3169.98 980.75 6390.09 00:10:56.498 PCIE (0000:00:12.0) NSID 3 from core 1: 5048.23 19.72 3170.10 970.75 6533.68 00:10:56.498 ======================================================== 00:10:56.498 Total : 30289.36 118.32 3169.24 947.54 7401.07 00:10:56.498 00:10:56.756 Initializing NVMe Controllers 00:10:56.756 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:56.756 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:56.756 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:56.756 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:56.756 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:56.756 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:56.756 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:56.756 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:56.756 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:56.756 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:56.756 Initialization complete. Launching workers. 00:10:56.756 ======================================================== 00:10:56.756 Latency(us) 00:10:56.756 Device Information : IOPS MiB/s Average min max 00:10:56.756 PCIE (0000:00:10.0) NSID 1 from core 2: 3301.00 12.89 4845.27 1371.53 12663.92 00:10:56.756 PCIE (0000:00:11.0) NSID 1 from core 2: 3301.00 12.89 4846.86 1247.39 12208.72 00:10:56.756 PCIE (0000:00:13.0) NSID 1 from core 2: 3301.00 12.89 4846.84 1248.65 12000.89 00:10:56.756 PCIE (0000:00:12.0) NSID 1 from core 2: 3301.00 12.89 4846.35 1205.96 10818.28 00:10:56.756 PCIE (0000:00:12.0) NSID 2 from core 2: 3301.00 12.89 4846.71 1238.13 10718.55 00:10:56.756 PCIE (0000:00:12.0) NSID 3 from core 2: 3301.00 12.89 4846.54 1203.01 10804.58 00:10:56.756 ======================================================== 00:10:56.756 Total : 19806.02 77.37 4846.43 1203.01 12663.92 00:10:56.756 00:10:56.756 12:04:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69257 00:10:58.675 Initializing NVMe Controllers 00:10:58.675 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:58.675 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:58.675 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:58.675 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:58.675 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:58.675 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:58.675 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:58.675 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:58.675 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:58.675 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:58.675 Initialization complete. Launching workers. 
00:10:58.675 ======================================================== 00:10:58.675 Latency(us) 00:10:58.675 Device Information : IOPS MiB/s Average min max 00:10:58.675 PCIE (0000:00:10.0) NSID 1 from core 0: 8568.30 33.47 1865.83 921.71 6411.43 00:10:58.675 PCIE (0000:00:11.0) NSID 1 from core 0: 8568.30 33.47 1866.90 947.00 6052.53 00:10:58.675 PCIE (0000:00:13.0) NSID 1 from core 0: 8568.30 33.47 1866.87 917.61 6103.67 00:10:58.675 PCIE (0000:00:12.0) NSID 1 from core 0: 8568.30 33.47 1866.83 853.80 6272.28 00:10:58.675 PCIE (0000:00:12.0) NSID 2 from core 0: 8568.30 33.47 1866.80 792.88 6495.98 00:10:58.675 PCIE (0000:00:12.0) NSID 3 from core 0: 8568.30 33.47 1866.77 723.88 6243.72 00:10:58.675 ======================================================== 00:10:58.675 Total : 51409.81 200.82 1866.67 723.88 6495.98 00:10:58.675 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69258 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69326 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69327 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:58.675 12:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:01.959 Initializing NVMe Controllers 00:11:01.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:01.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:01.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:01.959 Initialization complete. Launching workers. 
00:11:01.959 ======================================================== 00:11:01.959 Latency(us) 00:11:01.959 Device Information : IOPS MiB/s Average min max 00:11:01.959 PCIE (0000:00:10.0) NSID 1 from core 1: 5140.58 20.08 3110.18 962.38 9196.97 00:11:01.959 PCIE (0000:00:11.0) NSID 1 from core 1: 5140.58 20.08 3112.08 977.30 9400.87 00:11:01.959 PCIE (0000:00:13.0) NSID 1 from core 1: 5140.58 20.08 3112.46 990.56 10598.36 00:11:01.959 PCIE (0000:00:12.0) NSID 1 from core 1: 5140.58 20.08 3112.75 986.02 10885.83 00:11:01.959 PCIE (0000:00:12.0) NSID 2 from core 1: 5140.58 20.08 3112.89 986.54 9040.14 00:11:01.959 PCIE (0000:00:12.0) NSID 3 from core 1: 5145.91 20.10 3109.75 954.54 8795.35 00:11:01.959 ======================================================== 00:11:01.959 Total : 30848.80 120.50 3111.68 954.54 10885.83 00:11:01.959 00:11:01.959 Initializing NVMe Controllers 00:11:01.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:01.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:01.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:01.959 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:01.959 Initialization complete. Launching workers. 00:11:01.959 ======================================================== 00:11:01.959 Latency(us) 00:11:01.959 Device Information : IOPS MiB/s Average min max 00:11:01.959 PCIE (0000:00:10.0) NSID 1 from core 0: 4931.52 19.26 3241.86 1030.54 9069.54 00:11:01.959 PCIE (0000:00:11.0) NSID 1 from core 0: 4931.52 19.26 3243.67 1055.90 8792.19 00:11:01.959 PCIE (0000:00:13.0) NSID 1 from core 0: 4931.52 19.26 3243.78 1063.20 8487.49 00:11:01.959 PCIE (0000:00:12.0) NSID 1 from core 0: 4931.52 19.26 3243.72 1043.75 8646.76 00:11:01.959 PCIE (0000:00:12.0) NSID 2 from core 0: 4931.52 19.26 3243.71 1051.04 9032.11 00:11:01.959 PCIE (0000:00:12.0) NSID 3 from core 0: 4931.52 19.26 3243.66 1064.55 9115.83 00:11:01.960 ======================================================== 00:11:01.960 Total : 29589.09 115.58 3243.40 1030.54 9115.83 00:11:01.960 00:11:04.490 Initializing NVMe Controllers 00:11:04.490 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:04.490 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:04.490 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:04.490 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:04.490 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:04.490 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:04.490 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:04.490 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:04.490 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:04.490 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:04.490 Initialization complete. Launching workers. 
00:11:04.490 ======================================================== 00:11:04.490 Latency(us) 00:11:04.490 Device Information : IOPS MiB/s Average min max 00:11:04.490 PCIE (0000:00:10.0) NSID 1 from core 2: 3110.19 12.15 5142.13 1045.40 13941.86 00:11:04.490 PCIE (0000:00:11.0) NSID 1 from core 2: 3110.19 12.15 5143.94 1054.62 13364.62 00:11:04.490 PCIE (0000:00:13.0) NSID 1 from core 2: 3110.19 12.15 5143.58 1054.90 13264.76 00:11:04.490 PCIE (0000:00:12.0) NSID 1 from core 2: 3110.19 12.15 5143.98 1058.51 12773.19 00:11:04.490 PCIE (0000:00:12.0) NSID 2 from core 2: 3110.19 12.15 5143.87 1049.07 12843.19 00:11:04.490 PCIE (0000:00:12.0) NSID 3 from core 2: 3110.19 12.15 5143.52 1061.67 13819.49 00:11:04.490 ======================================================== 00:11:04.490 Total : 18661.13 72.90 5143.50 1045.40 13941.86 00:11:04.490 00:11:04.490 ************************************ 00:11:04.490 END TEST nvme_multi_secondary 00:11:04.490 ************************************ 00:11:04.490 12:04:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69326 00:11:04.490 12:04:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69327 00:11:04.490 00:11:04.490 real 0m11.028s 00:11:04.490 user 0m18.548s 00:11:04.490 sys 0m0.936s 00:11:04.490 12:04:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.490 12:04:52 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:04.490 12:04:52 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:04.490 12:04:52 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68264 ]] 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1090 -- # kill 68264 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1091 -- # wait 68264 00:11:04.490 [2024-07-26 12:04:52.082330] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.082472] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.082523] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.082574] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.088569] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.088677] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.088724] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.088814] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.094318] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 
00:11:04.490 [2024-07-26 12:04:52.094388] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.094417] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.094448] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.098799] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.098874] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.098903] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 [2024-07-26 12:04:52.098934] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69200) is not found. Dropping the request. 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:11:04.490 12:04:52 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:04.490 12:04:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:04.491 12:04:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.491 12:04:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.491 ************************************ 00:11:04.491 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:04.491 ************************************ 00:11:04.491 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:04.748 * Looking for test storage... 
00:11:04.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69482 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69482 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69482 ']' 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.748 12:04:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:05.007 [2024-07-26 12:04:52.747634] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:11:05.007 [2024-07-26 12:04:52.747944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69482 ] 00:11:05.007 [2024-07-26 12:04:52.938313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.264 [2024-07-26 12:04:53.175000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.264 [2024-07-26 12:04:53.175212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.264 [2024-07-26 12:04:53.175338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.264 [2024-07-26 12:04:53.175358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.197 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.197 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:11:06.197 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:06.197 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.197 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:06.455 nvme0n1 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XTsSg.txt 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:06.455 true 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721995494 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69516 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:06.455 12:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:08.366 [2024-07-26 12:04:56.244989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:08.366 [2024-07-26 12:04:56.245444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:08.366 [2024-07-26 12:04:56.245567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:08.366 [2024-07-26 12:04:56.245758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:08.366 [2024-07-26 12:04:56.247745] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69516 00:11:08.366 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69516 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69516 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:08.366 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XTsSg.txt 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:08.625 12:04:56 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XTsSg.txt 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69482 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69482 ']' 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69482 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.625 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69482 00:11:08.625 killing process with pid 69482 00:11:08.626 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.626 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.626 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69482' 00:11:08.626 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69482 00:11:08.626 12:04:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69482 00:11:11.158 12:04:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:11.158 12:04:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:11.158 00:11:11.158 real 
0m6.607s 00:11:11.158 user 0m22.495s 00:11:11.158 sys 0m0.775s 00:11:11.158 12:04:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.158 12:04:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:11.158 ************************************ 00:11:11.158 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:11.158 ************************************ 00:11:11.158 12:04:59 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:11.158 12:04:59 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:11.158 12:04:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:11.158 12:04:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.158 12:04:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:11.158 ************************************ 00:11:11.158 START TEST nvme_fio 00:11:11.158 ************************************ 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:11:11.158 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:11.158 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:11.158 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:11.158 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:11.416 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:11.416 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:11.416 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:11.416 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:11.416 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:11.416 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:11.416 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:11.674 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:11.674 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:11.933 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:11.933 12:04:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:11.933 12:04:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:12.191 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:12.191 fio-3.35 00:11:12.191 Starting 1 thread 00:11:15.502 00:11:15.502 test: (groupid=0, jobs=1): err= 0: pid=69668: Fri Jul 26 12:05:03 2024 00:11:15.502 read: IOPS=20.4k, BW=79.6MiB/s (83.5MB/s)(159MiB/2001msec) 00:11:15.502 slat (nsec): min=3981, max=57072, avg=5216.95, stdev=1849.13 00:11:15.502 clat (usec): min=190, max=11121, avg=3128.20, stdev=869.45 00:11:15.502 lat (usec): min=194, max=11161, avg=3133.42, stdev=870.48 00:11:15.502 clat percentiles (usec): 00:11:15.502 | 1.00th=[ 1680], 5.00th=[ 2409], 10.00th=[ 2737], 20.00th=[ 2835], 00:11:15.502 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:11:15.502 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 4178], 00:11:15.502 | 99.00th=[ 8029], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10421], 00:11:15.502 | 99.99th=[10814] 00:11:15.502 bw ( KiB/s): min=77296, max=84872, per=98.81%, avg=80538.67, stdev=3903.99, samples=3 00:11:15.502 iops : min=19324, max=21218, avg=20134.67, stdev=976.00, samples=3 00:11:15.502 write: IOPS=20.3k, BW=79.4MiB/s (83.2MB/s)(159MiB/2001msec); 0 zone resets 00:11:15.502 slat (nsec): min=4116, max=81109, avg=5370.96, stdev=1807.76 00:11:15.502 clat (usec): min=292, max=10917, avg=3131.99, stdev=860.42 00:11:15.502 lat (usec): min=296, max=10937, avg=3137.36, stdev=861.43 00:11:15.502 clat percentiles (usec): 00:11:15.502 | 1.00th=[ 1663], 5.00th=[ 2442], 10.00th=[ 2737], 20.00th=[ 2835], 00:11:15.502 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:11:15.502 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 4178], 00:11:15.502 | 99.00th=[ 7963], 99.50th=[ 8848], 99.90th=[10159], 99.95th=[10421], 00:11:15.502 | 99.99th=[10683] 00:11:15.502 bw ( KiB/s): min=77312, max=84728, per=99.12%, avg=80573.33, stdev=3787.85, samples=3 00:11:15.502 iops : min=19328, max=21182, avg=20143.33, stdev=946.96, samples=3 00:11:15.502 lat (usec) : 
250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:15.502 lat (msec) : 2=2.30%, 4=91.93%, 10=5.57%, 20=0.15% 00:11:15.502 cpu : usr=99.20%, sys=0.15%, ctx=5, majf=0, minf=606 00:11:15.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:15.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:15.502 issued rwts: total=40775,40664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:15.502 00:11:15.502 Run status group 0 (all jobs): 00:11:15.502 READ: bw=79.6MiB/s (83.5MB/s), 79.6MiB/s-79.6MiB/s (83.5MB/s-83.5MB/s), io=159MiB (167MB), run=2001-2001msec 00:11:15.502 WRITE: bw=79.4MiB/s (83.2MB/s), 79.4MiB/s-79.4MiB/s (83.2MB/s-83.2MB/s), io=159MiB (167MB), run=2001-2001msec 00:11:15.759 ----------------------------------------------------- 00:11:15.759 Suppressions used: 00:11:15.759 count bytes template 00:11:15.759 1 32 /usr/src/fio/parse.c 00:11:15.759 1 8 libtcmalloc_minimal.so 00:11:15.759 ----------------------------------------------------- 00:11:15.759 00:11:15.759 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:15.759 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:15.759 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:15.759 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:16.018 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:16.018 12:05:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:16.277 12:05:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:16.277 12:05:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:16.277 12:05:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:16.536 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:16.536 fio-3.35 00:11:16.536 Starting 1 thread 00:11:19.859 00:11:19.859 test: (groupid=0, jobs=1): err= 0: pid=69734: Fri Jul 26 12:05:07 2024 00:11:19.859 read: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(154MiB/2001msec) 00:11:19.859 slat (nsec): min=3937, max=60072, avg=5185.38, stdev=1965.80 00:11:19.859 clat (usec): min=211, max=14215, avg=3224.57, stdev=925.48 00:11:19.859 lat (usec): min=216, max=14260, avg=3229.76, stdev=926.71 00:11:19.859 clat percentiles (usec): 00:11:19.859 | 1.00th=[ 1860], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2835], 00:11:19.859 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:11:19.859 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3916], 95.00th=[ 4359], 00:11:19.859 | 99.00th=[ 7898], 99.50th=[ 8586], 99.90th=[ 9241], 99.95th=[10945], 00:11:19.859 | 99.99th=[13829] 00:11:19.859 bw ( KiB/s): min=75144, max=76864, per=96.21%, avg=75970.67, stdev=861.94, samples=3 00:11:19.859 iops : min=18786, max=19216, avg=18992.67, stdev=215.48, samples=3 00:11:19.859 write: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec); 0 zone resets 00:11:19.859 slat (nsec): min=4120, max=53976, avg=5336.27, stdev=1958.88 00:11:19.859 clat (usec): min=291, max=13947, avg=3236.31, stdev=942.99 00:11:19.859 lat (usec): min=296, max=13965, avg=3241.64, stdev=944.22 00:11:19.859 clat percentiles (usec): 00:11:19.859 | 1.00th=[ 1844], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2868], 00:11:19.859 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:11:19.859 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3916], 95.00th=[ 4490], 00:11:19.859 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[11338], 00:11:19.859 | 99.99th=[13566] 00:11:19.859 bw ( KiB/s): min=75256, max=77224, per=96.55%, avg=76088.00, stdev=1018.61, samples=3 00:11:19.859 iops : min=18814, max=19306, avg=19022.00, stdev=254.65, samples=3 00:11:19.859 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:11:19.859 lat (msec) : 2=1.35%, 4=91.06%, 10=7.48%, 20=0.07% 00:11:19.859 cpu : usr=99.05%, sys=0.15%, ctx=3, majf=0, minf=606 00:11:19.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:19.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.859 issued rwts: total=39502,39424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.859 00:11:19.859 Run status group 0 (all jobs): 00:11:19.859 READ: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:11:19.859 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (161MB), run=2001-2001msec 00:11:20.118 ----------------------------------------------------- 00:11:20.118 Suppressions used: 00:11:20.118 count bytes template 00:11:20.118 
1 32 /usr/src/fio/parse.c 00:11:20.118 1 8 libtcmalloc_minimal.so 00:11:20.118 ----------------------------------------------------- 00:11:20.118 00:11:20.118 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:20.118 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:20.118 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:20.118 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:20.377 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:20.377 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:20.636 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:20.636 12:05:08 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:20.636 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:20.895 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:20.895 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:20.895 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:20.895 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:20.895 12:05:08 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:20.895 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:20.895 fio-3.35 00:11:20.895 Starting 1 thread 00:11:25.129 00:11:25.129 test: (groupid=0, jobs=1): err= 0: pid=69795: Fri Jul 26 12:05:12 2024 00:11:25.129 read: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec) 00:11:25.129 slat (nsec): min=3970, max=52702, avg=5179.79, stdev=2098.78 00:11:25.129 clat (usec): min=203, max=32574, avg=3208.65, stdev=1218.71 00:11:25.129 lat (usec): min=208, max=32579, avg=3213.83, 
stdev=1219.83 00:11:25.129 clat percentiles (usec): 00:11:25.129 | 1.00th=[ 2409], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:25.129 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:11:25.129 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3556], 95.00th=[ 5080], 00:11:25.129 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[11469], 99.95th=[24773], 00:11:25.129 | 99.99th=[28967] 00:11:25.129 bw ( KiB/s): min=69344, max=87160, per=96.56%, avg=76090.67, stdev=9662.64, samples=3 00:11:25.129 iops : min=17336, max=21790, avg=19022.67, stdev=2415.66, samples=3 00:11:25.129 write: IOPS=19.7k, BW=76.8MiB/s (80.6MB/s)(154MiB/2001msec); 0 zone resets 00:11:25.129 slat (nsec): min=4081, max=71854, avg=5337.45, stdev=2164.58 00:11:25.129 clat (usec): min=227, max=34082, avg=3261.69, stdev=1662.77 00:11:25.129 lat (usec): min=232, max=34087, avg=3267.03, stdev=1663.59 00:11:25.129 clat percentiles (usec): 00:11:25.129 | 1.00th=[ 2442], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2835], 00:11:25.129 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:11:25.129 | 70.00th=[ 3032], 80.00th=[ 3130], 90.00th=[ 3621], 95.00th=[ 5407], 00:11:25.129 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[32375], 99.95th=[33424], 00:11:25.129 | 99.99th=[33817] 00:11:25.129 bw ( KiB/s): min=69272, max=87528, per=96.90%, avg=76245.33, stdev=9861.45, samples=3 00:11:25.129 iops : min=17318, max=21884, avg=19062.00, stdev=2466.51, samples=3 00:11:25.129 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:11:25.129 lat (msec) : 2=0.21%, 4=92.30%, 10=7.22%, 20=0.06%, 50=0.16% 00:11:25.129 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=606 00:11:25.129 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:25.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.129 issued rwts: total=39419,39360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.129 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.129 00:11:25.129 Run status group 0 (all jobs): 00:11:25.129 READ: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (161MB), run=2001-2001msec 00:11:25.129 WRITE: bw=76.8MiB/s (80.6MB/s), 76.8MiB/s-76.8MiB/s (80.6MB/s-80.6MB/s), io=154MiB (161MB), run=2001-2001msec 00:11:25.129 ----------------------------------------------------- 00:11:25.129 Suppressions used: 00:11:25.129 count bytes template 00:11:25.129 1 32 /usr/src/fio/parse.c 00:11:25.129 1 8 libtcmalloc_minimal.so 00:11:25.129 ----------------------------------------------------- 00:11:25.129 00:11:25.129 12:05:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:25.129 12:05:12 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:25.129 12:05:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:25.129 12:05:12 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:25.386 12:05:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:25.386 12:05:13 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:25.644 12:05:13 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:25.644 12:05:13 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:25.644 12:05:13 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:25.916 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:25.916 fio-3.35 00:11:25.916 Starting 1 thread 00:11:32.526 00:11:32.526 test: (groupid=0, jobs=1): err= 0: pid=69856: Fri Jul 26 12:05:19 2024 00:11:32.526 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:11:32.526 slat (nsec): min=3737, max=43949, avg=4766.38, stdev=1410.88 00:11:32.526 clat (usec): min=225, max=10749, avg=2821.19, stdev=347.87 00:11:32.526 lat (usec): min=230, max=10792, avg=2825.96, stdev=348.17 00:11:32.526 clat percentiles (usec): 00:11:32.526 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2573], 20.00th=[ 2704], 00:11:32.526 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:32.526 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:32.526 | 99.00th=[ 3884], 99.50th=[ 4359], 99.90th=[ 5669], 99.95th=[ 8455], 00:11:32.526 | 99.99th=[10552] 00:11:32.526 bw ( KiB/s): min=86475, max=94624, per=99.25%, avg=89739.67, stdev=4309.18, samples=3 00:11:32.526 iops : min=21618, max=23656, avg=22434.67, stdev=1077.58, samples=3 00:11:32.526 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:11:32.526 slat (nsec): min=3824, max=46114, avg=4933.94, stdev=1461.03 00:11:32.527 clat (usec): min=201, max=10650, avg=2829.49, stdev=349.89 00:11:32.527 lat (usec): min=206, max=10672, avg=2834.43, stdev=350.21 00:11:32.527 clat percentiles (usec): 00:11:32.527 | 1.00th=[ 1909], 5.00th=[ 2343], 10.00th=[ 2573], 
20.00th=[ 2704], 00:11:32.527 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:32.527 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097], 00:11:32.527 | 99.00th=[ 3884], 99.50th=[ 4359], 99.90th=[ 6652], 99.95th=[ 8717], 00:11:32.527 | 99.99th=[10159] 00:11:32.527 bw ( KiB/s): min=86123, max=93920, per=99.95%, avg=89881.00, stdev=3906.09, samples=3 00:11:32.527 iops : min=21530, max=23480, avg=22470.00, stdev=976.88, samples=3 00:11:32.527 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:32.527 lat (msec) : 2=1.47%, 4=97.62%, 10=0.85%, 20=0.02% 00:11:32.527 cpu : usr=99.25%, sys=0.15%, ctx=2, majf=0, minf=604 00:11:32.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:32.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:32.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:32.527 issued rwts: total=45229,44984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:32.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:32.527 00:11:32.527 Run status group 0 (all jobs): 00:11:32.527 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:11:32.527 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:32.527 ----------------------------------------------------- 00:11:32.527 Suppressions used: 00:11:32.527 count bytes template 00:11:32.527 1 32 /usr/src/fio/parse.c 00:11:32.527 1 8 libtcmalloc_minimal.so 00:11:32.527 ----------------------------------------------------- 00:11:32.527 00:11:32.527 12:05:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:32.527 12:05:19 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:32.527 00:11:32.527 real 0m20.767s 00:11:32.527 user 0m14.826s 00:11:32.527 sys 0m7.986s 00:11:32.527 12:05:19 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.527 12:05:19 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:32.527 ************************************ 00:11:32.527 END TEST nvme_fio 00:11:32.527 ************************************ 00:11:32.527 00:11:32.527 real 1m36.087s 00:11:32.527 user 3m44.694s 00:11:32.527 sys 0m25.357s 00:11:32.527 12:05:19 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.527 12:05:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:32.527 ************************************ 00:11:32.527 END TEST nvme 00:11:32.527 ************************************ 00:11:32.527 12:05:19 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:11:32.527 12:05:19 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:32.527 12:05:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:32.527 12:05:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.527 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:11:32.527 ************************************ 00:11:32.527 START TEST nvme_scc 00:11:32.527 ************************************ 00:11:32.527 12:05:19 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:32.527 * Looking for test storage... 
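The three fio passes above (bdfs 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0) all follow one pattern from nvme.sh and autotest_common.sh: for each PCIe bdf the test runs spdk_nvme_identify, greps for 'Namespace ID:' and 'Extended Data LBA' to confirm the namespace and pick a block size (plain 4 KiB in this run), then locates the ASan runtime that the spdk_nvme fio plugin links against and preloads it ahead of the plugin before launching fio. A condensed sketch of that flow, reconstructed only from the commands visible in this trace (the loop body is a paraphrase, not the full fio_plugin()/fio_nvme helpers):

  identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  fio_cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  for bdf in 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      # nvme.sh@35: skip controllers whose identify output lists no namespace
      "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue
      # nvme.sh@38/@41: identify is also grepped for 'Extended Data LBA' to choose the
      # block size; every controller in this run ends up on the plain 4 KiB path
      bs=4096
      # autotest_common.sh@1345-@1352: find the ASan runtime the plugin was built
      # against so it can be preloaded ahead of the plugin itself
      asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
      # the bdf is written with dots inside --filename (presumably because fio
      # reserves ':' as a filename separator), matching the traddr=0000.00.xx.0 form in the log
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_cfg" \
          "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
  done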
00:11:32.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:32.527 12:05:20 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:32.527 12:05:20 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.527 12:05:20 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.527 12:05:20 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.527 12:05:20 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.527 12:05:20 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.527 12:05:20 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.527 12:05:20 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:32.527 12:05:20 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:32.527 12:05:20 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:32.527 12:05:20 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.527 12:05:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:32.527 12:05:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:32.527 12:05:20 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:32.527 12:05:20 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:32.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.043 Waiting for block devices as requested 00:11:33.043 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.301 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.301 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.560 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:38.871 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:38.871 12:05:26 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:38.871 12:05:26 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:38.871 12:05:26 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:38.871 12:05:26 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.871 12:05:26 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:38.871 
12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:38.871 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:38.871 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:38.872 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:38.872 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.873 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.874 12:05:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:38.874 12:05:26 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:38.874 12:05:26 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:38.874 12:05:26 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.874 12:05:26 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:38.874 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.874 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 
12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:38.875 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:38.875 12:05:26 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.875 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:38.876 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 
12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:38.876 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:38.877 12:05:26 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:38.877 12:05:26 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:38.877 12:05:26 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.877 12:05:26 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.877 12:05:26 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.877 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
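Every eval in the trace above comes from the same helper: nvme_get pipes the output of /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) through an IFS=: read loop and stores each non-empty value into a global associative array named after the device (nvme2, nvme2n1, ...). The following is a minimal sketch of that pattern reconstructed from the xtrace; the whitespace trimming and value normalization details are assumptions rather than a copy of functions.sh.

    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                       # e.g. local -gA 'nvme2=()' as seen in the trace
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # "lbaf  0 " -> "lbaf0"
        val=${val#"${val%%[![:space:]]*}"}      # drop leading spaces, keep trailing ones
        [[ -n $val ]] || continue               # mirrors the [[ -n ... ]] guards above
        eval "${ref}[${reg}]=\"${val}\""        # e.g. nvme2[vid]="0x1b36"
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   echo "${nvme2[sn]}"      # -> "12342 "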
00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
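The outer structure is visible at functions.sh lines 47-63 of the trace: the script walks /sys/class/nvme/nvme*, skips controllers that pci_can_use rejects, runs nvme_get for the controller and for each of its namespaces, and records the results in the ctrls, nvmes, bdfs and ordered_ctrls maps. A simplified reconstruction follows; the PCI address lookup via the controller's sysfs device link is an assumption, since the xtrace only shows the resulting value.

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed lookup)
      pci_can_use "$pci" || continue                     # honors the PCI allow/block lists
      ctrl_dev=${ctrl##*/}                               # e.g. nvme2
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"      # fills nvme2=( [vid]=... [sn]=... )

      declare -gA "${ctrl_dev}_ns=()"                    # per-controller namespace map
      declare -n _ctrl_ns=${ctrl_dev}_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do                # /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                                 # e.g. nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"          # fills nvme2n1=( [nsze]=... )
        _ctrl_ns[${ns##*n}]=$ns_dev                      # keyed by namespace number
      done
      unset -n _ctrl_ns

      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

For the controllers in this run that yields bdfs[nvme1]=0000:00:10.0 and bdfs[nvme2]=0000:00:12.0, matching the assignments shown in the trace.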
00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:38.878 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:38.878 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 
12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:38.879 12:05:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.879 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:38.880 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:38.881 12:05:26 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.881 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
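[note] The trace entries above are produced by the nvme_get helper in nvme/functions.sh: each "field : value" line printed by `nvme id-ns` (or `nvme id-ctrl`) is split on ':' via `IFS=: read -r reg val` and stored with eval into a bash associative array named after the device (nvme2n1, nvme2n2, nvme2n3, ...). A minimal sketch of that pattern, assuming nvme-cli is installed and the device path exists; the helper name parse_id_ns, the fixed array ns_info, and the trimming are illustrative simplifications, not the exact SPDK implementation:

    declare -A ns_info
    parse_id_ns() {
        local dev=$1 reg val
        # nvme-cli prints lines such as "nsze    : 0x100000"; split on the first ':'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # "lbaf  0" -> "lbaf0", matching the keys in the trace
            val=${val# }                 # drop the single space nvme-cli prints after ':'
            [[ -n $reg && -n $val ]] || continue
            ns_info[$reg]=$val
        done < <(nvme id-ns "$dev")
    }
    parse_id_ns /dev/nvme2n2             # illustrative device, as seen in this run
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

In the actual script the target array name is passed by reference and declared with `local -gA`, which is why the trace shows `eval 'nvme2n2[nsze]="0x100000"'` rather than a fixed array; the sketch keeps a single array only to stay self-contained.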
00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:38.882 12:05:26 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:38.882 12:05:26 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:38.882 12:05:26 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:38.882 12:05:26 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:38.882 12:05:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:38.882 12:05:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:38.882 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:38.883 12:05:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:38.883 12:05:26 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.883 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:38.884 
12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
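The xtrace above shows how test/common/nvme/functions.sh builds one bash associative array per controller: it pipes the nvme-cli `nvme id-ctrl` output through a colon-split read loop and evals each non-empty field into the array (e.g. nvme3[oncs]=0x15d, nvme3[sqes]=0x66, nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3). A minimal sketch of that pattern follows; it is not the SPDK helper itself, and the array name "ctrl_regs" and the device path are placeholders chosen for illustration.

# Sketch: parse `nvme id-ctrl` output ("reg : val" per line) into an associative array,
# mirroring the IFS=: / read -r reg val / [[ -n $val ]] / eval sequence traced above.
declare -A ctrl_regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                      # "oncs      " -> "oncs"
    val="${val#"${val%%[![:space:]]*}"}"          # trim leading spaces after the colon
    [[ -n $val ]] || continue                     # skip banner and empty lines, as the [[ -n ... ]] guard does
    ctrl_regs[$reg]=$val                          # e.g. ctrl_regs[oncs]=0x15d, ctrl_regs[sqes]=0x66
done < <(nvme id-ctrl /dev/nvme3)

Keeping everything as strings in one array is what lets the later feature checks look up registers generically (get_nvme_ctrl_feature simply dereferences the array by name).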
00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:38.884 12:05:26 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:38.884 12:05:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:38.884 12:05:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:38.884 12:05:26 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:39.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.385 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.385 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.385 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.385 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.644 12:05:28 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:40.644 12:05:28 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.644 12:05:28 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.644 12:05:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:40.644 ************************************ 00:11:40.644 START TEST nvme_simple_copy 00:11:40.644 ************************************ 00:11:40.644 12:05:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:40.903 Initializing NVMe Controllers 00:11:40.903 Attaching to 0000:00:10.0 00:11:40.903 Controller supports SCC. Attached to 0000:00:10.0 00:11:40.903 Namespace ID: 1 size: 6GB 00:11:40.903 Initialization complete. 00:11:40.903 00:11:40.903 Controller QEMU NVMe Ctrl (12340 ) 00:11:40.903 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:40.903 Namespace Block Size:4096 00:11:40.903 Writing LBAs 0 to 63 with Random Data 00:11:40.903 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:40.903 LBAs matching Written Data: 64 00:11:40.903 00:11:40.903 real 0m0.309s 00:11:40.903 user 0m0.111s 00:11:40.903 sys 0m0.095s 00:11:40.903 12:05:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.903 12:05:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:40.903 ************************************ 00:11:40.903 END TEST nvme_simple_copy 00:11:40.903 ************************************ 00:11:40.903 00:11:40.903 real 0m8.841s 00:11:40.903 user 0m1.511s 00:11:40.903 sys 0m2.409s 00:11:40.903 12:05:28 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.903 12:05:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:40.903 ************************************ 00:11:40.903 END TEST nvme_scc 00:11:40.903 ************************************ 00:11:40.903 12:05:28 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:11:40.903 12:05:28 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:11:40.903 12:05:28 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:11:40.903 12:05:28 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:11:40.903 12:05:28 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:40.903 12:05:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:40.903 12:05:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.903 12:05:28 -- common/autotest_common.sh@10 -- # set +x 00:11:40.903 ************************************ 00:11:40.903 START TEST nvme_fdp 00:11:40.903 ************************************ 00:11:40.903 12:05:28 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:11:41.162 * Looking for test storage... 
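The controller-selection trace just before the simple-copy run (get_ctrls_with_feature / ctrl_has_scc) reads each controller's ONCS value out of its array and tests bit 8, which is the Simple Copy Command capability; every QEMU controller here reports oncs=0x15d, so all four pass and the harness takes the first match (nvme1 at 0000:00:10.0) for the test. A minimal sketch of that single check, assuming the hypothetical "ctrl_regs" array from the earlier sketch:

# Sketch: the ONCS bit-8 test used to decide SCC support (0x15d & 0x100 != 0).
oncs=${ctrl_regs[oncs]:-0}            # e.g. 0x15d for the controllers in this run
if (( oncs & 1 << 8 )); then
    echo "controller supports Simple Copy"    # taken for 0x15d, matching "Controller supports SCC." above
else
    echo "controller lacks Simple Copy"
fi

Because bash associative arrays do not guarantee key order, which SCC-capable controller is echoed first is an artifact of iteration order; in this run it happens to be nvme1.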
00:11:41.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.162 12:05:28 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:41.162 12:05:28 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:41.162 12:05:29 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.162 12:05:29 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.162 12:05:29 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.162 12:05:29 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.162 12:05:29 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.162 12:05:29 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.162 12:05:29 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:41.162 12:05:29 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:41.162 12:05:29 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:41.162 12:05:29 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.162 12:05:29 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:41.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:41.987 Waiting for block devices as requested 00:11:41.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.245 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.245 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.503 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.815 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:47.815 12:05:35 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:47.815 12:05:35 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:47.815 12:05:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:47.816 12:05:35 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.816 12:05:35 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:47.816 12:05:35 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.816 12:05:35 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 
12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.816 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:47.816 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:47.817 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.817 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:47.818 12:05:35 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:47.818 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.819 
12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.819 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:47.820 
12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:47.820 12:05:35 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:47.820 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:47.821 12:05:35 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.821 12:05:35 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:47.821 12:05:35 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.821 12:05:35 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:47.821 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.821 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 
12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:47.822 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:47.822 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:47.823 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.823 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.824 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.825 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
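Once a device has been parsed this way, the script keeps only the name of its array and dereferences it indirectly: the `local -n _ctrl_ns=nvme1_ns` nameref near the top of this excerpt, plus the ctrls/nvmes/bdfs tables that get filled in once the loop finishes, are what let later helpers reach a field by device name. A hedged toy version of that bookkeeping, with a hypothetical get_field helper and two values parsed above:

declare -A nvme1n1=([nsze]=0x17a17a [flbas]=0x7)   # fields as parsed in the trace
declare -A nvme1_ns=([1]=nvme1n1)                  # namespace index -> array name
declare -A nvmes=([nvme1]=nvme1_ns)                # controller -> its namespace table
get_field() {                                      # get_field <array-name> <field>
    local -n _arr=$1
    echo "${_arr[$2]}"
}
get_field nvme1n1 nsze                             # prints 0x17a17a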
00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 
12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
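As a quick sanity check on the numbers just parsed: flbas=0x7 selects LBA format 7, which the "(in use)" entry immediately below reports as ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata, so nsze=0x17a17a blocks comes to roughly 5.9 GiB. The same arithmetic in shell, using the traced values:

nsze=0x17a17a lbads=12            # values taken from the surrounding trace
echo $((nsze * (1 << lbads)))     # 6343335936 bytes, ~5.9 GiB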
00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:47.826 12:05:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:47.827 12:05:35 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.827 12:05:35 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:47.827 12:05:35 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.827 12:05:35 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.827 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:47.828 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.828 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:47.829 12:05:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.829 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:47.830 12:05:35 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 
12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.830 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:47.831 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:47.831 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:47.832 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:47.832 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.833 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.834 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:47.835 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.835 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:47.836 12:05:35 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:47.836 12:05:35 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:47.836 12:05:35 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:47.836 12:05:35 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:47.836 12:05:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:47.837 12:05:35 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.837 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.838 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.839 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 
12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:47.840 12:05:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:47.840 12:05:35 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:47.841 12:05:35 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:47.841 12:05:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:47.841 12:05:35 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:47.841 12:05:35 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:48.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:49.343 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.343 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.343 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.343 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:49.601 12:05:37 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:49.601 12:05:37 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:49.601 12:05:37 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.601 12:05:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:49.601 ************************************ 00:11:49.601 START TEST nvme_flexible_data_placement 00:11:49.601 ************************************ 00:11:49.601 12:05:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:49.860 Initializing NVMe Controllers 00:11:49.860 Attaching to 0000:00:13.0 00:11:49.860 Controller supports FDP Attached to 0000:00:13.0 00:11:49.860 Namespace ID: 1 Endurance Group ID: 1 
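Before the fdp example application above was launched, get_ctrl_with_feature walked every parsed controller and kept only the one whose CTRATT has bit 19 (Flexible Data Placement) set: nvme3 reports 0x88010, the others report 0x8000, so only nvme3 passes. The same test on its own, with the CTRATT value taken straight from the trace:

  ctratt=0x88010                    # value the trace shows for nvme3
  if (( ctratt & 1 << 19 )); then   # bit 19 of CTRATT flags FDP support
    echo "controller supports FDP"
  else
    echo "controller does not support FDP"
  fi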
00:11:49.860 Initialization complete. 00:11:49.860 00:11:49.860 ================================== 00:11:49.860 == FDP tests for Namespace: #01 == 00:11:49.860 ================================== 00:11:49.860 00:11:49.860 Get Feature: FDP: 00:11:49.860 ================= 00:11:49.860 Enabled: Yes 00:11:49.860 FDP configuration Index: 0 00:11:49.860 00:11:49.860 FDP configurations log page 00:11:49.860 =========================== 00:11:49.860 Number of FDP configurations: 1 00:11:49.860 Version: 0 00:11:49.860 Size: 112 00:11:49.860 FDP Configuration Descriptor: 0 00:11:49.860 Descriptor Size: 96 00:11:49.860 Reclaim Group Identifier format: 2 00:11:49.860 FDP Volatile Write Cache: Not Present 00:11:49.860 FDP Configuration: Valid 00:11:49.860 Vendor Specific Size: 0 00:11:49.860 Number of Reclaim Groups: 2 00:11:49.860 Number of Reclaim Unit Handles: 8 00:11:49.860 Max Placement Identifiers: 128 00:11:49.860 Number of Namespaces Supported: 256 00:11:49.860 Reclaim unit Nominal Size: 6000000 bytes 00:11:49.860 Estimated Reclaim Unit Time Limit: Not Reported 00:11:49.860 RUH Desc #000: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #001: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #002: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #003: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #004: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #005: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #006: RUH Type: Initially Isolated 00:11:49.860 RUH Desc #007: RUH Type: Initially Isolated 00:11:49.860 00:11:49.860 FDP reclaim unit handle usage log page 00:11:49.860 ====================================== 00:11:49.860 Number of Reclaim Unit Handles: 8 00:11:49.860 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:49.860 RUH Usage Desc #001: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #002: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #003: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #004: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #005: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #006: RUH Attributes: Unused 00:11:49.860 RUH Usage Desc #007: RUH Attributes: Unused 00:11:49.860 00:11:49.860 FDP statistics log page 00:11:49.860 ======================= 00:11:49.860 Host bytes with metadata written: 891826176 00:11:49.860 Media bytes with metadata written: 891985920 00:11:49.860 Media bytes erased: 0 00:11:49.860 00:11:49.860 FDP Reclaim unit handle status 00:11:49.860 ============================== 00:11:49.860 Number of RUHS descriptors: 2 00:11:49.860 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000d7d 00:11:49.860 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:49.860 00:11:49.860 FDP write on placement id: 0 success 00:11:49.860 00:11:49.860 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:49.860 00:11:49.860 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:49.860 00:11:49.860 Get Feature: FDP Events for Placement handle: #0 00:11:49.860 ======================== 00:11:49.860 Number of FDP Events: 6 00:11:49.860 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:49.860 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:49.860 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:49.860 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:49.860 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:49.860 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
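One convenient way to read the FDP statistics log page above is as a write-amplification ratio, media bytes written over host bytes written; this is plain arithmetic on the two counters reported in this run, not something the test computes itself:

  host=891826176    # Host bytes with metadata written
  media=891985920   # Media bytes with metadata written
  awk -v h="$host" -v m="$media" 'BEGIN { printf "write amplification: %.5f\n", m / h }'
  # prints 1.00018 for the counters above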
00:11:49.860 00:11:49.860 FDP events log page 00:11:49.860 =================== 00:11:49.860 Number of FDP events: 1 00:11:49.860 FDP Event #0: 00:11:49.860 Event Type: RU Not Written to Capacity 00:11:49.860 Placement Identifier: Valid 00:11:49.860 NSID: Valid 00:11:49.860 Location: Valid 00:11:49.860 Placement Identifier: 0 00:11:49.860 Event Timestamp: 8 00:11:49.860 Namespace Identifier: 1 00:11:49.860 Reclaim Group Identifier: 0 00:11:49.860 Reclaim Unit Handle Identifier: 0 00:11:49.860 00:11:49.860 FDP test passed 00:11:49.860 00:11:49.860 real 0m0.293s 00:11:49.860 user 0m0.101s 00:11:49.860 sys 0m0.093s 00:11:49.860 ************************************ 00:11:49.860 END TEST nvme_flexible_data_placement 00:11:49.860 ************************************ 00:11:49.860 12:05:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.860 12:05:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:49.860 ************************************ 00:11:49.860 END TEST nvme_fdp 00:11:49.860 ************************************ 00:11:49.860 00:11:49.860 real 0m8.880s 00:11:49.860 user 0m1.468s 00:11:49.860 sys 0m2.424s 00:11:49.860 12:05:37 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.860 12:05:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:49.860 12:05:37 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:11:49.860 12:05:37 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:49.860 12:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:49.860 12:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.860 12:05:37 -- common/autotest_common.sh@10 -- # set +x 00:11:49.860 ************************************ 00:11:49.860 START TEST nvme_rpc 00:11:49.860 ************************************ 00:11:49.860 12:05:37 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:50.119 * Looking for test storage... 
00:11:50.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:50.119 12:05:37 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.119 12:05:37 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:50.119 12:05:37 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:11:50.119 12:05:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:50.119 12:05:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71210 00:11:50.119 12:05:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:50.119 12:05:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:50.119 12:05:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71210 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71210 ']' 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.119 12:05:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.399 [2024-07-26 12:05:38.153855] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
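get_first_nvme_bdf above assembles its candidate list by asking scripts/gen_nvme.sh for a JSON config and extracting every traddr with jq, then picks the first entry (0000:00:10.0 in this run). Condensed into a single pipeline, with head -n1 standing in for the array indexing the helper actually performs:

  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  echo "first NVMe bdf: $bdf"   # 0000:00:10.0 here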
00:11:50.399 [2024-07-26 12:05:38.154170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71210 ] 00:11:50.399 [2024-07-26 12:05:38.311480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.658 [2024-07-26 12:05:38.543521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.658 [2024-07-26 12:05:38.543554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.592 12:05:39 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.592 12:05:39 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:51.592 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:51.851 Nvme0n1 00:11:51.851 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:51.851 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:52.108 request: 00:11:52.108 { 00:11:52.108 "bdev_name": "Nvme0n1", 00:11:52.108 "filename": "non_existing_file", 00:11:52.108 "method": "bdev_nvme_apply_firmware", 00:11:52.108 "req_id": 1 00:11:52.108 } 00:11:52.108 Got JSON-RPC error response 00:11:52.108 response: 00:11:52.108 { 00:11:52.108 "code": -32603, 00:11:52.108 "message": "open file failed." 00:11:52.108 } 00:11:52.108 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:52.108 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:52.108 12:05:39 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:52.108 12:05:40 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:52.108 12:05:40 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71210 00:11:52.108 12:05:40 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71210 ']' 00:11:52.108 12:05:40 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71210 00:11:52.108 12:05:40 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:11:52.108 12:05:40 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.108 12:05:40 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71210 00:11:52.365 killing process with pid 71210 00:11:52.365 12:05:40 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.365 12:05:40 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.365 12:05:40 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71210' 00:11:52.365 12:05:40 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71210 00:11:52.365 12:05:40 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71210 00:11:54.897 ************************************ 00:11:54.897 END TEST nvme_rpc 00:11:54.897 ************************************ 00:11:54.897 00:11:54.897 real 0m4.588s 00:11:54.897 user 0m8.157s 00:11:54.897 sys 0m0.732s 00:11:54.897 12:05:42 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.897 12:05:42 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.897 12:05:42 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:54.897 12:05:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:11:54.897 12:05:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.897 12:05:42 -- common/autotest_common.sh@10 -- # set +x 00:11:54.897 ************************************ 00:11:54.897 START TEST nvme_rpc_timeouts 00:11:54.897 ************************************ 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:54.897 * Looking for test storage... 00:11:54.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71293 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71293 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71317 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:54.897 12:05:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71317 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71317 ']' 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.897 12:05:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:54.897 [2024-07-26 12:05:42.703375] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
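The nvme_rpc suite that finished above exercises three RPCs against the target: attach the first PCIe controller as bdev Nvme0, deliberately feed bdev_nvme_apply_firmware a file that does not exist (producing the -32603 "open file failed." response shown earlier), then detach. A condensed replay of those calls, assuming an spdk_tgt is already listening on the default socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "apply_firmware failed as expected"
  $rpc bdev_nvme_detach_controller Nvme0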
00:11:54.897 [2024-07-26 12:05:42.703500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71317 ] 00:11:54.897 [2024-07-26 12:05:42.872486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:55.156 [2024-07-26 12:05:43.099352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.156 [2024-07-26 12:05:43.099387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.106 Checking default timeout settings: 00:11:56.106 12:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.106 12:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:11:56.106 12:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:56.106 12:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:56.364 Making settings changes with rpc: 00:11:56.364 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:56.364 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:56.624 Check default vs. modified settings: 00:11:56.624 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:56.624 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71293 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71293 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.896 Setting action_on_timeout is changed as expected. 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71293 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71293 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:56.896 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:57.163 Setting timeout_us is changed as expected. 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71293 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71293 00:11:57.163 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:57.164 Setting timeout_admin_us is changed as expected. 
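Each of the "changed as expected" messages above comes from the same mechanism: save_config is dumped before and after bdev_nvme_set_options, the setting is grepped out of both dumps, cleaned up with awk and sed, and the two values are compared. The same loop, condensed; the temp file names are illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_before
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_after
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_before | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_after | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done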
00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71293 /tmp/settings_modified_71293 00:11:57.164 12:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71317 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71317 ']' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71317 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71317 00:11:57.164 killing process with pid 71317 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71317' 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71317 00:11:57.164 12:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71317 00:11:59.696 RPC TIMEOUT SETTING TEST PASSED. 00:11:59.696 12:05:47 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:59.696 ************************************ 00:11:59.696 END TEST nvme_rpc_timeouts 00:11:59.696 ************************************ 00:11:59.696 00:11:59.696 real 0m4.913s 00:11:59.696 user 0m9.032s 00:11:59.696 sys 0m0.756s 00:11:59.696 12:05:47 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.696 12:05:47 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 12:05:47 -- spdk/autotest.sh@247 -- # uname -s 00:11:59.696 12:05:47 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:11:59.696 12:05:47 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:59.696 12:05:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:59.696 12:05:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.696 12:05:47 -- common/autotest_common.sh@10 -- # set +x 00:11:59.696 ************************************ 00:11:59.696 START TEST sw_hotplug 00:11:59.696 ************************************ 00:11:59.696 12:05:47 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:59.696 * Looking for test storage... 
00:11:59.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:59.696 12:05:47 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:00.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:00.523 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.523 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.523 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.523 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:00.523 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:00.523 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:00.523 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:12:00.523 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@230 -- # local class 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:12:00.523 12:05:48 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@15 -- # local i 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:00.523 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:00.524 12:05:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:00.524 12:05:48 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:12:00.524 12:05:48 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:00.524 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:12:00.782 12:05:48 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:12:00.782 12:05:48 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:12:00.782 12:05:48 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:12:00.782 12:05:48 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:00.782 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:00.782 12:05:48 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:00.782 12:05:48 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:01.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:01.299 Waiting for block devices as requested 00:12:01.558 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.558 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.817 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:01.817 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:07.097 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:07.097 12:05:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:07.097 12:05:54 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:07.360 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:07.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:07.623 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:07.881 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:08.139 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:08.139 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72195 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:08.397 12:05:56 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:08.397 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:08.398 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:08.398 12:05:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:08.656 Initializing NVMe Controllers 00:12:08.656 Attaching to 0000:00:10.0 00:12:08.656 Attaching to 0000:00:11.0 00:12:08.656 Attached to 0000:00:11.0 00:12:08.656 Attached to 0000:00:10.0 00:12:08.656 Initialization complete. Starting I/O... 
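The hotplug run above gets its target list from nvme_in_userspace, which scans lspci for PCI class 01 (mass storage) / subclass 08 (non-volatile memory) / prog-if 02 (NVMe) and keeps each matching bus:device.function; PCI_ALLOWED then restricts the run to the first two controllers. The scan pipeline from the trace, runnable on its own:

  # print the BDF of every NVMe controller (class code 0108, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'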
00:12:08.656 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:08.656 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:08.656 00:12:09.592 QEMU NVMe Ctrl (12341 ): 1576 I/Os completed (+1576) 00:12:09.592 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1576) 00:12:09.592 00:12:10.968 QEMU NVMe Ctrl (12341 ): 3660 I/Os completed (+2084) 00:12:10.968 QEMU NVMe Ctrl (12340 ): 3660 I/Os completed (+2084) 00:12:10.968 00:12:11.903 QEMU NVMe Ctrl (12341 ): 5860 I/Os completed (+2200) 00:12:11.903 QEMU NVMe Ctrl (12340 ): 5860 I/Os completed (+2200) 00:12:11.903 00:12:12.839 QEMU NVMe Ctrl (12341 ): 7976 I/Os completed (+2116) 00:12:12.839 QEMU NVMe Ctrl (12340 ): 7983 I/Os completed (+2123) 00:12:12.839 00:12:13.775 QEMU NVMe Ctrl (12341 ): 10208 I/Os completed (+2232) 00:12:13.775 QEMU NVMe Ctrl (12340 ): 10215 I/Os completed (+2232) 00:12:13.775 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.711 [2024-07-26 12:06:02.339074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:14.711 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:14.711 [2024-07-26 12:06:02.340969] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.341063] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.341091] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.341130] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:14.711 [2024-07-26 12:06:02.343935] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.343985] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.344002] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.344021] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.711 [2024-07-26 12:06:02.380936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
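Each hotplug pass starts with the echo 1 traced at sw_hotplug.sh@40, once per controller, which is what makes the hotplug example report both controllers as failed and abort their outstanding I/O. The trace does not show where that 1 is written; assuming the usual sysfs node for a software-simulated surprise removal, the pattern is:

  # assumed sysfs target: yank the device out from under its driver
  for bdf in 0000:00:10.0 0000:00:11.0; do
      echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # controller disappears; driver logs 'in failed state'
  done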
00:12:14.711 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:14.711 [2024-07-26 12:06:02.382691] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.382847] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.382907] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.383016] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:14.711 [2024-07-26 12:06:02.385776] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.385922] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.385975] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 [2024-07-26 12:06:02.386088] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.711 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:14.711 Attaching to 0000:00:10.0 00:12:14.711 Attached to 0000:00:10.0 00:12:14.711 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.971 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.971 12:06:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.971 Attaching to 0000:00:11.0 00:12:14.971 Attached to 0000:00:11.0 00:12:15.914 QEMU NVMe Ctrl (12340 ): 2084 I/Os completed (+2084) 00:12:15.914 QEMU NVMe Ctrl (12341 ): 1848 I/Os completed (+1848) 00:12:15.914 00:12:16.847 QEMU NVMe Ctrl (12340 ): 4108 I/Os completed (+2024) 00:12:16.847 QEMU NVMe Ctrl (12341 ): 3872 I/Os completed (+2024) 00:12:16.847 00:12:17.784 QEMU NVMe Ctrl (12340 ): 6202 I/Os completed (+2094) 00:12:17.784 QEMU NVMe Ctrl (12341 ): 5964 I/Os completed (+2092) 00:12:17.784 00:12:18.718 QEMU NVMe Ctrl (12340 ): 8258 I/Os completed (+2056) 00:12:18.718 QEMU NVMe Ctrl (12341 ): 8023 I/Os completed (+2059) 00:12:18.718 00:12:19.653 QEMU NVMe Ctrl (12340 ): 10410 I/Os completed (+2152) 00:12:19.653 QEMU NVMe Ctrl (12341 ): 10175 I/Os completed (+2152) 00:12:19.653 00:12:20.587 QEMU NVMe Ctrl (12340 ): 12518 I/Os completed (+2108) 00:12:20.587 QEMU NVMe Ctrl (12341 ): 12283 I/Os completed (+2108) 00:12:20.587 00:12:21.962 QEMU NVMe Ctrl (12340 ): 14618 I/Os completed (+2100) 00:12:21.962 QEMU NVMe Ctrl (12341 ): 14386 I/Os completed (+2103) 00:12:21.962 00:12:22.933 QEMU NVMe Ctrl (12340 ): 16734 I/Os completed (+2116) 00:12:22.933 QEMU NVMe Ctrl (12341 ): 16502 I/Os completed (+2116) 00:12:22.933 
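After a removal pass, sw_hotplug.sh@56-62 re-attach both controllers: an echo 1, then per device uio_pci_generic, the BDF twice, and an empty string, which is the shape of the driver_override flow. The sysfs targets are not visible in the trace, so the mapping below is one plausible reading of those echoes rather than the script verbatim:

  # assumed mapping of the echoes traced at sw_hotplug.sh@56-62
  echo 1 > /sys/bus/pci/rescan                                              # @56: re-discover the removed functions
  for bdf in 0000:00:10.0 0000:00:11.0; do                                  # @58
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"    # @59: pin the driver choice
      echo "$bdf" > /sys/bus/pci/drivers_probe                              # @60: ask the kernel to probe it
      echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/bind 2> /dev/null || true   # @61: bind if not already bound
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                 # @62: clear the override again
  done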
00:12:23.885 QEMU NVMe Ctrl (12340 ): 18870 I/Os completed (+2136) 00:12:23.885 QEMU NVMe Ctrl (12341 ): 18638 I/Os completed (+2136) 00:12:23.885 00:12:24.858 QEMU NVMe Ctrl (12340 ): 20974 I/Os completed (+2104) 00:12:24.858 QEMU NVMe Ctrl (12341 ): 20742 I/Os completed (+2104) 00:12:24.858 00:12:25.804 QEMU NVMe Ctrl (12340 ): 23066 I/Os completed (+2092) 00:12:25.804 QEMU NVMe Ctrl (12341 ): 22834 I/Os completed (+2092) 00:12:25.804 00:12:26.741 QEMU NVMe Ctrl (12340 ): 25118 I/Os completed (+2052) 00:12:26.741 QEMU NVMe Ctrl (12341 ): 24886 I/Os completed (+2052) 00:12:26.741 00:12:26.741 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:26.741 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.741 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.741 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.741 [2024-07-26 12:06:14.717770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:26.741 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:26.741 [2024-07-26 12:06:14.719906] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.998 [2024-07-26 12:06:14.720044] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.720157] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.720312] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:26.999 [2024-07-26 12:06:14.724558] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.724755] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.724802] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.724836] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.999 [2024-07-26 12:06:14.755042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:26.999 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:26.999 [2024-07-26 12:06:14.756997] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.757059] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.757102] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.757176] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:26.999 [2024-07-26 12:06:14.760662] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.760726] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.760754] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 [2024-07-26 12:06:14.760788] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:26.999 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:26.999 EAL: Scan for (pci) bus failed. 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.999 12:06:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:27.257 Attaching to 0000:00:10.0 00:12:27.257 Attached to 0000:00:10.0 00:12:27.257 12:06:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.257 12:06:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.257 12:06:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.257 Attaching to 0000:00:11.0 00:12:27.257 Attached to 0000:00:11.0 00:12:27.824 QEMU NVMe Ctrl (12340 ): 1212 I/Os completed (+1212) 00:12:27.824 QEMU NVMe Ctrl (12341 ): 1004 I/Os completed (+1004) 00:12:27.824 00:12:28.760 QEMU NVMe Ctrl (12340 ): 3456 I/Os completed (+2244) 00:12:28.760 QEMU NVMe Ctrl (12341 ): 3248 I/Os completed (+2244) 00:12:28.760 00:12:29.694 QEMU NVMe Ctrl (12340 ): 5720 I/Os completed (+2264) 00:12:29.694 QEMU NVMe Ctrl (12341 ): 5512 I/Os completed (+2264) 00:12:29.694 00:12:30.630 QEMU NVMe Ctrl (12340 ): 7948 I/Os completed (+2228) 00:12:30.630 QEMU NVMe Ctrl (12341 ): 7741 I/Os completed (+2229) 00:12:30.630 00:12:31.611 QEMU NVMe Ctrl (12340 ): 10136 I/Os completed (+2188) 00:12:31.611 QEMU NVMe Ctrl (12341 ): 9929 I/Os completed (+2188) 00:12:31.611 00:12:32.546 QEMU NVMe Ctrl (12340 ): 12316 I/Os completed (+2180) 00:12:32.546 QEMU NVMe Ctrl (12341 ): 12105 I/Os completed (+2176) 00:12:32.546 00:12:33.922 QEMU NVMe Ctrl (12340 ): 14437 I/Os completed (+2121) 00:12:33.922 QEMU NVMe Ctrl (12341 ): 14223 I/Os completed (+2118) 00:12:33.922 
00:12:34.546 QEMU NVMe Ctrl (12340 ): 16569 I/Os completed (+2132) 00:12:34.546 QEMU NVMe Ctrl (12341 ): 16359 I/Os completed (+2136) 00:12:34.546 00:12:35.919 QEMU NVMe Ctrl (12340 ): 18769 I/Os completed (+2200) 00:12:35.919 QEMU NVMe Ctrl (12341 ): 18559 I/Os completed (+2200) 00:12:35.919 00:12:36.855 QEMU NVMe Ctrl (12340 ): 20893 I/Os completed (+2124) 00:12:36.855 QEMU NVMe Ctrl (12341 ): 20688 I/Os completed (+2129) 00:12:36.855 00:12:37.801 QEMU NVMe Ctrl (12340 ): 23085 I/Os completed (+2192) 00:12:37.801 QEMU NVMe Ctrl (12341 ): 22876 I/Os completed (+2188) 00:12:37.801 00:12:38.738 QEMU NVMe Ctrl (12340 ): 25345 I/Os completed (+2260) 00:12:38.738 QEMU NVMe Ctrl (12341 ): 25136 I/Os completed (+2260) 00:12:38.738 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.306 [2024-07-26 12:06:27.088353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:39.306 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:39.306 [2024-07-26 12:06:27.090019] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.090077] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.090098] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.090131] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:39.306 [2024-07-26 12:06:27.092945] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.092993] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.093012] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.093034] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:39.306 [2024-07-26 12:06:27.127925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:39.306 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:39.306 [2024-07-26 12:06:27.129576] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.129636] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.129680] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.129704] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:39.306 [2024-07-26 12:06:27.132281] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.132324] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.132347] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 [2024-07-26 12:06:27.132363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:39.306 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:39.306 EAL: Scan for (pci) bus failed. 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.306 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:39.566 Attaching to 0000:00:10.0 00:12:39.566 Attached to 0000:00:10.0 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:39.566 12:06:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.566 Attaching to 0000:00:11.0 00:12:39.566 Attached to 0000:00:11.0 00:12:39.566 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:39.566 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:39.566 [2024-07-26 12:06:27.458041] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:51.776 12:06:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:51.776 12:06:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:51.776 12:06:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.12 00:12:51.776 12:06:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.12 00:12:51.776 12:06:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:51.776 12:06:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.12 00:12:51.777 12:06:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.12 2 00:12:51.777 remove_attach_helper took 43.12s to complete (handling 2 nvme drive(s)) 12:06:39 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72195 00:12:58.347 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72195) - No such process 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72195 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72738 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:58.347 12:06:45 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72738 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72738 ']' 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:58.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:58.347 12:06:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.347 [2024-07-26 12:06:45.566699] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
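The "No such process" message above is expected: kill -0 only probes whether the pid still exists, so its non-zero return tells the script the hotplug example has already finished, and wait then collects its status before the traps are cleared and tgt_run_hotplug starts the spdk_tgt-based phase. As a sketch, the liveness check amounts to:

  # probe the hotplug example without signalling it, then reap it
  if ! kill -0 "$hotplug_pid" 2> /dev/null; then
      echo "hotplug example ($hotplug_pid) already exited"
  fi
  wait "$hotplug_pid"   # propagate its exit status to the test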
00:12:58.347 [2024-07-26 12:06:45.566826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72738 ] 00:12:58.347 [2024-07-26 12:06:45.738326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.347 [2024-07-26 12:06:45.971947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:59.280 12:06:46 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:59.280 12:06:46 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.864 12:06:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.864 12:06:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.864 12:06:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.864 [2024-07-26 12:06:52.994630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
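The /dev/fd/63 in the jq invocation above is bash process substitution, which pins down what the bdev_bdfs helper does: ask the running target for its bdevs over RPC (rpc_cmd is the test framework's RPC helper) and reduce the answer to a sorted, de-duplicated list of backing PCI addresses. Reconstructed from the trace:

  # list the PCI addresses of the NVMe controllers the target still exposes as bdevs
  bdev_bdfs() {
      jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
  }

  bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0) while both controllers are attached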
00:13:05.864 [2024-07-26 12:06:52.996978] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:52.997025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:52.997057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:52.997081] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:52.997097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:52.997110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:52.997138] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:52.997150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:52.997164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:52.997177] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:52.997192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:52.997204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 12:06:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:05.864 [2024-07-26 12:06:53.394008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
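With use_bdev=true, detecting the removal is no longer the example binary's job; the helper polls the target instead. The (( 2 > 0 )) and sleep 0.5 above are that loop, and the printf traced at sw_hotplug.sh@51 just below is its progress message. Functionally the loop is:

  # poll until every removed controller has disappeared from the target's bdev list
  bdfs=($(bdev_bdfs))
  while ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
      bdfs=($(bdev_bdfs))
  done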
00:13:05.864 [2024-07-26 12:06:53.396480] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:53.396524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:53.396541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:53.396566] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:53.396578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:53.396593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:53.396606] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:53.396619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:53.396631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 [2024-07-26 12:06:53.396646] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:05.864 [2024-07-26 12:06:53.396656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.864 [2024-07-26 12:06:53.396670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:05.864 12:06:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.864 12:06:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:05.864 12:06:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:05.864 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:06.127 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:06.127 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:06.127 12:06:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:18.332 12:07:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.332 12:07:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:18.332 12:07:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:18.332 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:18.332 [2024-07-26 12:07:05.973789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:18.332 [2024-07-26 12:07:05.976572] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.332 [2024-07-26 12:07:05.976613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.332 [2024-07-26 12:07:05.976632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.332 [2024-07-26 12:07:05.976654] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.333 [2024-07-26 12:07:05.976668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.333 [2024-07-26 12:07:05.976680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.333 [2024-07-26 12:07:05.976696] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.333 [2024-07-26 12:07:05.976706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.333 [2024-07-26 12:07:05.976720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.333 [2024-07-26 12:07:05.976733] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.333 [2024-07-26 12:07:05.976747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.333 [2024-07-26 12:07:05.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.333 12:07:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:18.333 12:07:05 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:18.333 12:07:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.333 12:07:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:18.333 12:07:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:18.333 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:18.591 [2024-07-26 12:07:06.373158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:13:18.591 [2024-07-26 12:07:06.375468] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.591 [2024-07-26 12:07:06.375510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.591 [2024-07-26 12:07:06.375527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.591 [2024-07-26 12:07:06.375554] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.592 [2024-07-26 12:07:06.375566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.592 [2024-07-26 12:07:06.375670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.592 [2024-07-26 12:07:06.375683] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.592 [2024-07-26 12:07:06.375696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.592 [2024-07-26 12:07:06.375708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.592 [2024-07-26 12:07:06.375723] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:18.592 [2024-07-26 12:07:06.375734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.592 [2024-07-26 12:07:06.375748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.592 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:18.592 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:18.851 12:07:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.851 12:07:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:18.851 12:07:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:18.851 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:19.109 12:07:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:31.354 12:07:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:31.354 12:07:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.354 12:07:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:31.354 12:07:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:31.354 [2024-07-26 12:07:19.052764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:13:31.354 [2024-07-26 12:07:19.055943] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.354 [2024-07-26 12:07:19.055984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.354 [2024-07-26 12:07:19.056008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.354 [2024-07-26 12:07:19.056031] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.354 [2024-07-26 12:07:19.056049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.354 [2024-07-26 12:07:19.056062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.354 [2024-07-26 12:07:19.056085] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.354 [2024-07-26 12:07:19.056097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.354 [2024-07-26 12:07:19.056114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.354 [2024-07-26 12:07:19.056139] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.354 [2024-07-26 12:07:19.056157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.354 [2024-07-26 12:07:19.056169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:31.354 12:07:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.354 12:07:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:31.354 12:07:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:31.354 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:31.613 [2024-07-26 12:07:19.452160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:13:31.613 [2024-07-26 12:07:19.455042] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.613 [2024-07-26 12:07:19.455090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.613 [2024-07-26 12:07:19.455107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.613 [2024-07-26 12:07:19.455146] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.613 [2024-07-26 12:07:19.455158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.613 [2024-07-26 12:07:19.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.613 [2024-07-26 12:07:19.455189] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.613 [2024-07-26 12:07:19.455205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.613 [2024-07-26 12:07:19.455217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.613 [2024-07-26 12:07:19.455240] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:31.613 [2024-07-26 12:07:19.455251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.613 [2024-07-26 12:07:19.455268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:31.873 12:07:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.873 12:07:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:31.873 12:07:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:31.873 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:32.132 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:32.132 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:32.132 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:32.132 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:32.132 12:07:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:32.132 12:07:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:32.132 12:07:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:32.132 12:07:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:13:44.392 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.392 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.392 12:07:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:44.393 12:07:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:44.393 12:07:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:44.393 12:07:32 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 [2024-07-26 12:07:38.221157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:13:50.961 [2024-07-26 12:07:38.224038] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.224090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.224132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.224165] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.224189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.224206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.224229] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.224244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.224264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.224281] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.224300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.224316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:50.961 [2024-07-26 12:07:38.620509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
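The timing wrapper is also visible in the trace: remove_attach_helper runs under bash's time with TIMEFORMAT=%2R, and the captured value (45.19 above for the target-based phase, 43.12 earlier for the example-binary phase) feeds the "remove_attach_helper took ...s" summary. The real timing_cmd in autotest_common.sh juggles file descriptors (the exec traced above); a simplified but functionally similar sketch:

  # time the helper and keep only the elapsed seconds (real helper also preserves the command's output)
  timing_cmd() {
      local cmd_es=0
      local time=0 TIMEFORMAT=%2R               # report only real time, two decimals
      time=$({ time "$@" > /dev/null 2>&1; } 2>&1) || cmd_es=$?
      echo "$time"
      return "$cmd_es"
  }

  helper_time=$(timing_cmd remove_attach_helper 3 6 true)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
      "$helper_time" "$nvme_count"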
00:13:50.961 [2024-07-26 12:07:38.622873] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.622921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.622939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.622965] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.622978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.622993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.623007] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.623023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.623035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 [2024-07-26 12:07:38.623051] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:50.961 [2024-07-26 12:07:38.623062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:50.961 [2024-07-26 12:07:38.623077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 12:07:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.961 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:51.221 12:07:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:51.221 12:07:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:03.447 [2024-07-26 12:07:51.300158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
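The [[ 0000:00:10.0 0000:00:11.0 == ... ]] comparison traced at sw_hotplug.sh@71, here and after each earlier re-attach, is the success criterion for a pass: once the devices are plugged back in, the sorted list of PCI addresses reported by the target must match the controllers the test started with. The escaped right-hand side in the xtrace is just how bash prints the quoted expected value; in sketch form (the use of nvmes as the reference list is an inference from those values):

  # after re-attach, the target must expose bdevs for exactly the controllers under test
  bdfs=($(bdev_bdfs))
  [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # i.e. "0000:00:10.0 0000:00:11.0"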
00:14:03.447 [2024-07-26 12:07:51.302090] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.447 [2024-07-26 12:07:51.302121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.447 [2024-07-26 12:07:51.302158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.447 [2024-07-26 12:07:51.302183] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.447 [2024-07-26 12:07:51.302198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.447 [2024-07-26 12:07:51.302211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.447 [2024-07-26 12:07:51.302227] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.447 [2024-07-26 12:07:51.302239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.447 [2024-07-26 12:07:51.302270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.447 [2024-07-26 12:07:51.302284] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:03.447 [2024-07-26 12:07:51.302298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:03.447 [2024-07-26 12:07:51.302311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:03.447 12:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:03.447 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:04.013 [2024-07-26 12:07:51.699524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:04.013 [2024-07-26 12:07:51.701930] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.013 [2024-07-26 12:07:51.701978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.013 [2024-07-26 12:07:51.701996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.013 [2024-07-26 12:07:51.702021] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.014 [2024-07-26 12:07:51.702032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.014 [2024-07-26 12:07:51.702050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.014 [2024-07-26 12:07:51.702063] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.014 [2024-07-26 12:07:51.702079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.014 [2024-07-26 12:07:51.702090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.014 [2024-07-26 12:07:51.702106] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:04.014 [2024-07-26 12:07:51.702138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.014 [2024-07-26 12:07:51.702154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:04.014 12:07:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.014 12:07:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:04.014 12:07:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:04.014 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:04.272 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:04.272 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:04.272 12:07:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:04.273 12:07:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.535 12:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.535 12:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.535 12:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:16.535 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:16.535 12:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.535 12:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:16.535 [2024-07-26 12:08:04.379134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:14:16.535 [2024-07-26 12:08:04.380901] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.535 [2024-07-26 12:08:04.380945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.536 [2024-07-26 12:08:04.380965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.536 [2024-07-26 12:08:04.380990] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.536 [2024-07-26 12:08:04.381004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.536 [2024-07-26 12:08:04.381016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.536 [2024-07-26 12:08:04.381034] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.536 [2024-07-26 12:08:04.381045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.536 [2024-07-26 12:08:04.381062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.536 [2024-07-26 12:08:04.381076] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:16.536 [2024-07-26 12:08:04.381089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.536 [2024-07-26 12:08:04.381101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:16.536 12:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.536 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:16.536 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:17.103 [2024-07-26 12:08:04.778504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:14:17.103 [2024-07-26 12:08:04.781030] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.103 [2024-07-26 12:08:04.781082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.103 [2024-07-26 12:08:04.781099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.103 [2024-07-26 12:08:04.781274] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.103 [2024-07-26 12:08:04.781303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.103 [2024-07-26 12:08:04.781319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.103 [2024-07-26 12:08:04.781335] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.104 [2024-07-26 12:08:04.781349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.104 [2024-07-26 12:08:04.781361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.104 [2024-07-26 12:08:04.781376] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.104 [2024-07-26 12:08:04.781387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.104 [2024-07-26 12:08:04.781404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:17.104 12:08:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.104 12:08:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.104 12:08:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:17.104 12:08:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:17.104 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.104 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.104 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:17.363 12:08:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:29.590 12:08:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:29.590 12:08:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.590 12:08:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:29.590 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:29.590 12:08:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.19 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.19 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:29.591 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.19 00:14:29.591 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.19 2 00:14:29.591 remove_attach_helper took 45.19s to complete (handling 2 nvme drive(s)) 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:29.591 12:08:17 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72738 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72738 ']' 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72738 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72738 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72738' 00:14:29.591 killing process with pid 72738 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72738 00:14:29.591 12:08:17 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72738 00:14:32.122 12:08:20 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:32.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:33.262 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:33.262 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:33.262 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.262 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:33.262 00:14:33.262 real 2m33.746s 00:14:33.262 user 1m51.650s 00:14:33.262 sys 0m22.227s 00:14:33.262 12:08:21 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.262 12:08:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:33.262 ************************************ 00:14:33.262 END TEST sw_hotplug 00:14:33.262 ************************************ 00:14:33.520 12:08:21 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:14:33.520 12:08:21 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:33.520 12:08:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:33.520 12:08:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.520 12:08:21 -- common/autotest_common.sh@10 -- # set +x 00:14:33.520 ************************************ 00:14:33.520 START TEST nvme_xnvme 00:14:33.520 ************************************ 00:14:33.520 12:08:21 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:33.520 * Looking for test storage... 00:14:33.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:33.520 12:08:21 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.520 12:08:21 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.520 12:08:21 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.520 12:08:21 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.520 12:08:21 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.520 12:08:21 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.520 12:08:21 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.520 12:08:21 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:33.521 12:08:21 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.521 12:08:21 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:33.521 12:08:21 nvme_xnvme -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:14:33.521 12:08:21 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.521 12:08:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.521 ************************************ 00:14:33.521 START TEST xnvme_to_malloc_dd_copy 00:14:33.521 ************************************ 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:33.521 12:08:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:33.815 { 00:14:33.815 "subsystems": [ 00:14:33.815 { 00:14:33.815 "subsystem": "bdev", 00:14:33.815 "config": [ 00:14:33.815 { 00:14:33.815 "params": { 00:14:33.815 "block_size": 512, 00:14:33.815 "num_blocks": 2097152, 00:14:33.815 "name": "malloc0" 00:14:33.816 }, 00:14:33.816 "method": "bdev_malloc_create" 00:14:33.816 }, 00:14:33.816 { 00:14:33.816 "params": 
{ 00:14:33.816 "io_mechanism": "libaio", 00:14:33.816 "filename": "/dev/nullb0", 00:14:33.816 "name": "null0" 00:14:33.816 }, 00:14:33.816 "method": "bdev_xnvme_create" 00:14:33.816 }, 00:14:33.816 { 00:14:33.816 "method": "bdev_wait_for_examine" 00:14:33.816 } 00:14:33.816 ] 00:14:33.816 } 00:14:33.816 ] 00:14:33.816 } 00:14:33.816 [2024-07-26 12:08:21.544669] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:14:33.816 [2024-07-26 12:08:21.544936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74108 ] 00:14:33.816 [2024-07-26 12:08:21.721194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.074 [2024-07-26 12:08:21.982039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.045  Copying: 253/1024 [MB] (253 MBps) Copying: 509/1024 [MB] (256 MBps) Copying: 767/1024 [MB] (257 MBps) Copying: 1024/1024 [MB] (average 256 MBps) 00:14:44.045 00:14:44.045 12:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:44.045 12:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:44.045 12:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:44.045 12:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:44.045 { 00:14:44.045 "subsystems": [ 00:14:44.045 { 00:14:44.045 "subsystem": "bdev", 00:14:44.045 "config": [ 00:14:44.045 { 00:14:44.045 "params": { 00:14:44.045 "block_size": 512, 00:14:44.045 "num_blocks": 2097152, 00:14:44.045 "name": "malloc0" 00:14:44.045 }, 00:14:44.045 "method": "bdev_malloc_create" 00:14:44.045 }, 00:14:44.045 { 00:14:44.045 "params": { 00:14:44.045 "io_mechanism": "libaio", 00:14:44.045 "filename": "/dev/nullb0", 00:14:44.045 "name": "null0" 00:14:44.045 }, 00:14:44.045 "method": "bdev_xnvme_create" 00:14:44.045 }, 00:14:44.045 { 00:14:44.045 "method": "bdev_wait_for_examine" 00:14:44.045 } 00:14:44.045 ] 00:14:44.045 } 00:14:44.045 ] 00:14:44.045 } 00:14:44.304 [2024-07-26 12:08:32.042668] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:14:44.304 [2024-07-26 12:08:32.042796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74224 ] 00:14:44.304 [2024-07-26 12:08:32.215197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.562 [2024-07-26 12:08:32.450974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.506  Copying: 247/1024 [MB] (247 MBps) Copying: 504/1024 [MB] (256 MBps) Copying: 756/1024 [MB] (252 MBps) Copying: 1012/1024 [MB] (255 MBps) Copying: 1024/1024 [MB] (average 253 MBps) 00:14:54.506 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:54.506 12:08:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:54.766 { 00:14:54.766 "subsystems": [ 00:14:54.766 { 00:14:54.766 "subsystem": "bdev", 00:14:54.766 "config": [ 00:14:54.766 { 00:14:54.766 "params": { 00:14:54.766 "block_size": 512, 00:14:54.766 "num_blocks": 2097152, 00:14:54.766 "name": "malloc0" 00:14:54.766 }, 00:14:54.766 "method": "bdev_malloc_create" 00:14:54.766 }, 00:14:54.766 { 00:14:54.766 "params": { 00:14:54.766 "io_mechanism": "io_uring", 00:14:54.766 "filename": "/dev/nullb0", 00:14:54.766 "name": "null0" 00:14:54.766 }, 00:14:54.766 "method": "bdev_xnvme_create" 00:14:54.766 }, 00:14:54.766 { 00:14:54.766 "method": "bdev_wait_for_examine" 00:14:54.766 } 00:14:54.766 ] 00:14:54.766 } 00:14:54.766 ] 00:14:54.766 } 00:14:54.766 [2024-07-26 12:08:42.559183] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:14:54.766 [2024-07-26 12:08:42.559338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74339 ] 00:14:54.766 [2024-07-26 12:08:42.731005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.024 [2024-07-26 12:08:42.969703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.987  Copying: 265/1024 [MB] (265 MBps) Copying: 529/1024 [MB] (263 MBps) Copying: 790/1024 [MB] (261 MBps) Copying: 1024/1024 [MB] (average 263 MBps) 00:15:04.987 00:15:04.987 12:08:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:04.987 12:08:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:04.987 12:08:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:04.987 12:08:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:04.987 { 00:15:04.987 "subsystems": [ 00:15:04.987 { 00:15:04.987 "subsystem": "bdev", 00:15:04.987 "config": [ 00:15:04.987 { 00:15:04.987 "params": { 00:15:04.987 "block_size": 512, 00:15:04.987 "num_blocks": 2097152, 00:15:04.987 "name": "malloc0" 00:15:04.987 }, 00:15:04.987 "method": "bdev_malloc_create" 00:15:04.987 }, 00:15:04.987 { 00:15:04.987 "params": { 00:15:04.987 "io_mechanism": "io_uring", 00:15:04.987 "filename": "/dev/nullb0", 00:15:04.987 "name": "null0" 00:15:04.987 }, 00:15:04.987 "method": "bdev_xnvme_create" 00:15:04.987 }, 00:15:04.987 { 00:15:04.987 "method": "bdev_wait_for_examine" 00:15:04.987 } 00:15:04.987 ] 00:15:04.987 } 00:15:04.987 ] 00:15:04.987 } 00:15:04.987 [2024-07-26 12:08:52.775374] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:15:04.987 [2024-07-26 12:08:52.775502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74454 ] 00:15:04.987 [2024-07-26 12:08:52.944659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.246 [2024-07-26 12:08:53.182653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.212  Copying: 258/1024 [MB] (258 MBps) Copying: 517/1024 [MB] (258 MBps) Copying: 775/1024 [MB] (257 MBps) Copying: 1024/1024 [MB] (average 258 MBps) 00:15:15.212 00:15:15.212 12:09:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:15.212 12:09:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:15.212 00:15:15.212 real 0m41.607s 00:15:15.212 user 0m36.790s 00:15:15.212 sys 0m4.275s 00:15:15.212 ************************************ 00:15:15.212 END TEST xnvme_to_malloc_dd_copy 00:15:15.213 ************************************ 00:15:15.213 12:09:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.213 12:09:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:15.213 12:09:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:15.213 12:09:03 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:15.213 12:09:03 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.213 12:09:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.213 ************************************ 00:15:15.213 START TEST xnvme_bdevperf 00:15:15.213 ************************************ 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.213 12:09:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.470 { 00:15:15.470 "subsystems": [ 00:15:15.470 { 00:15:15.470 "subsystem": "bdev", 00:15:15.470 "config": [ 00:15:15.470 { 00:15:15.470 "params": { 00:15:15.470 "io_mechanism": "libaio", 00:15:15.470 "filename": "/dev/nullb0", 00:15:15.470 "name": "null0" 00:15:15.470 }, 00:15:15.470 "method": "bdev_xnvme_create" 00:15:15.470 }, 00:15:15.470 { 00:15:15.470 "method": "bdev_wait_for_examine" 00:15:15.470 } 00:15:15.470 ] 00:15:15.470 } 00:15:15.470 ] 00:15:15.470 } 00:15:15.470 [2024-07-26 12:09:03.242185] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:15:15.470 [2024-07-26 12:09:03.242315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74591 ] 00:15:15.470 [2024-07-26 12:09:03.414373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.728 [2024-07-26 12:09:03.654241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.294 Running I/O for 5 seconds... 00:15:21.589 00:15:21.589 Latency(us) 00:15:21.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.589 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:21.589 null0 : 5.00 155594.02 607.79 0.00 0.00 408.82 134.07 3158.36 00:15:21.589 =================================================================================================================== 00:15:21.589 Total : 155594.02 607.79 0.00 0.00 408.82 134.07 3158.36 00:15:22.541 12:09:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:22.542 12:09:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:22.542 12:09:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:22.542 12:09:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:22.542 12:09:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.542 12:09:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.542 { 00:15:22.542 "subsystems": [ 00:15:22.542 { 00:15:22.542 "subsystem": "bdev", 00:15:22.542 "config": [ 00:15:22.542 { 00:15:22.542 "params": { 00:15:22.542 "io_mechanism": "io_uring", 00:15:22.542 "filename": "/dev/nullb0", 00:15:22.542 "name": "null0" 00:15:22.542 }, 00:15:22.542 "method": "bdev_xnvme_create" 00:15:22.542 }, 00:15:22.542 { 00:15:22.542 "method": "bdev_wait_for_examine" 00:15:22.542 } 00:15:22.542 ] 00:15:22.542 } 00:15:22.542 ] 00:15:22.542 } 00:15:22.542 [2024-07-26 12:09:10.434099] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:15:22.542 [2024-07-26 12:09:10.434425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:15:22.806 [2024-07-26 12:09:10.605932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.064 [2024-07-26 12:09:10.838813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.321 Running I/O for 5 seconds... 00:15:28.609 00:15:28.609 Latency(us) 00:15:28.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.609 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:28.609 null0 : 5.00 201838.08 788.43 0.00 0.00 314.64 195.75 430.98 00:15:28.609 =================================================================================================================== 00:15:28.609 Total : 201838.08 788.43 0.00 0.00 314.64 195.75 430.98 00:15:29.986 12:09:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:29.986 12:09:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:29.986 00:15:29.986 real 0m14.438s 00:15:29.986 user 0m11.115s 00:15:29.986 sys 0m3.108s 00:15:29.986 12:09:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.986 ************************************ 00:15:29.986 END TEST xnvme_bdevperf 00:15:29.986 ************************************ 00:15:29.986 12:09:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 ************************************ 00:15:29.986 END TEST nvme_xnvme 00:15:29.986 ************************************ 00:15:29.986 00:15:29.986 real 0m56.331s 00:15:29.986 user 0m48.002s 00:15:29.986 sys 0m7.574s 00:15:29.986 12:09:17 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.986 12:09:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 12:09:17 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:29.986 12:09:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:29.986 12:09:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.986 12:09:17 -- common/autotest_common.sh@10 -- # set +x 00:15:29.986 ************************************ 00:15:29.986 START TEST blockdev_xnvme 00:15:29.986 ************************************ 00:15:29.986 12:09:17 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:29.986 * Looking for test storage... 
00:15:29.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:29.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74817 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:29.986 12:09:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74817 00:15:29.986 12:09:17 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 74817 ']' 00:15:29.986 12:09:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.987 12:09:17 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.987 12:09:17 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.987 12:09:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.987 12:09:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.987 [2024-07-26 12:09:17.937545] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:15:29.987 [2024-07-26 12:09:17.937914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74817 ] 00:15:30.273 [2024-07-26 12:09:18.108016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.531 [2024-07-26 12:09:18.340008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.466 12:09:19 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.466 12:09:19 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:15:31.466 12:09:19 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:31.466 12:09:19 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:31.466 12:09:19 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:31.466 12:09:19 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:31.466 12:09:19 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:32.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:32.035 Waiting for block devices as requested 00:15:32.294 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:32.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:32.294 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:32.570 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:37.848 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:37.848 12:09:25 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:37.848 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:37.849 nvme0n1 00:15:37.849 nvme1n1 00:15:37.849 nvme2n1 00:15:37.849 nvme2n2 00:15:37.849 nvme2n3 00:15:37.849 nvme3n1 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.849 
12:09:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a1136689-070e-4006-9f8c-1652d020fb4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a1136689-070e-4006-9f8c-1652d020fb4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "00da4045-bed1-44e7-9a68-8a26f20c58b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "00da4045-bed1-44e7-9a68-8a26f20c58b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e3b1aa4c-519f-4163-bcbc-fcf56b000241"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3b1aa4c-519f-4163-bcbc-fcf56b000241",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "18174629-d6d8-45c8-98cc-0641600443fe"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "18174629-d6d8-45c8-98cc-0641600443fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "bb10b44a-3178-490e-8777-86490e783e21"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb10b44a-3178-490e-8777-86490e783e21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "573176b5-9ea8-4d9c-a5c7-f53e8b4d28fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "573176b5-9ea8-4d9c-a5c7-f53e8b4d28fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:37.849 12:09:25 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74817 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 74817 ']' 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 74817 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74817 00:15:37.849 killing process with pid 74817 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 74817' 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 74817 00:15:37.849 12:09:25 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 74817 00:15:40.424 12:09:28 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:40.424 12:09:28 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:40.424 12:09:28 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:40.424 12:09:28 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.424 12:09:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.424 ************************************ 00:15:40.424 START TEST bdev_hello_world 00:15:40.424 ************************************ 00:15:40.424 12:09:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:40.424 [2024-07-26 12:09:28.379445] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:15:40.424 [2024-07-26 12:09:28.379624] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75203 ] 00:15:40.683 [2024-07-26 12:09:28.543027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.942 [2024-07-26 12:09:28.769521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.532 [2024-07-26 12:09:29.254697] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:41.532 [2024-07-26 12:09:29.254754] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:41.532 [2024-07-26 12:09:29.254773] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:41.532 [2024-07-26 12:09:29.256768] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:41.532 [2024-07-26 12:09:29.257150] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:41.532 [2024-07-26 12:09:29.257173] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:41.532 [2024-07-26 12:09:29.257390] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
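For readers following the trace, the hello_world step above reduces to a short shell pattern; the sketch below is illustrative only, mirroring what bdev/blockdev.sh@747-751 does in the lines above, and assumes a "$bdevs" variable already holding the bdev JSON (the fetch itself is not part of this excerpt).
# Illustrative sketch, not part of the run: pick the unclaimed bdevs, keep their
# names, and hand the first one to the hello_bdev example, as traced above.
# "$bdevs" is an assumed variable holding the bdev JSON fetched earlier.
mapfile -t bdevs_name < <(jq -r '.[] | select(.claimed == false)' <<< "$bdevs" | jq -r .name)
hello_world_bdev=${bdevs_name[0]}    # resolves to nvme0n1 in this run
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -b "$hello_world_bdev" ''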
00:15:41.532 00:15:41.532 [2024-07-26 12:09:29.257411] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:42.907 ************************************ 00:15:42.907 END TEST bdev_hello_world 00:15:42.907 ************************************ 00:15:42.907 00:15:42.907 real 0m2.273s 00:15:42.907 user 0m1.923s 00:15:42.907 sys 0m0.233s 00:15:42.907 12:09:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.907 12:09:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:42.907 12:09:30 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:42.907 12:09:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:42.907 12:09:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.907 12:09:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.907 ************************************ 00:15:42.907 START TEST bdev_bounds 00:15:42.907 ************************************ 00:15:42.907 Process bdevio pid: 75246 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75246 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75246' 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75246 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75246 ']' 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.907 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.908 12:09:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:42.908 [2024-07-26 12:09:30.695909] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:15:42.908 [2024-07-26 12:09:30.696041] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75246 ] 00:15:42.908 [2024-07-26 12:09:30.870372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:43.165 [2024-07-26 12:09:31.102459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.165 [2024-07-26 12:09:31.102614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.165 [2024-07-26 12:09:31.102656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.735 12:09:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.735 12:09:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:15:43.735 12:09:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:43.735 I/O targets: 00:15:43.735 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:43.735 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:43.735 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:43.735 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:43.735 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:43.735 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:43.735 00:15:43.735 00:15:43.735 CUnit - A unit testing framework for C - Version 2.1-3 00:15:43.735 http://cunit.sourceforge.net/ 00:15:43.735 00:15:43.735 00:15:43.735 Suite: bdevio tests on: nvme3n1 00:15:43.735 Test: blockdev write read block ...passed 00:15:43.735 Test: blockdev write zeroes read block ...passed 00:15:43.735 Test: blockdev write zeroes read no split ...passed 00:15:43.993 Test: blockdev write zeroes read split ...passed 00:15:43.993 Test: blockdev write zeroes read split partial ...passed 00:15:43.993 Test: blockdev reset ...passed 00:15:43.993 Test: blockdev write read 8 blocks ...passed 00:15:43.993 Test: blockdev write read size > 128k ...passed 00:15:43.993 Test: blockdev write read invalid size ...passed 00:15:43.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.993 Test: blockdev write read max offset ...passed 00:15:43.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:43.993 Test: blockdev writev readv 8 blocks ...passed 00:15:43.993 Test: blockdev writev readv 30 x 1block ...passed 00:15:43.993 Test: blockdev writev readv block ...passed 00:15:43.993 Test: blockdev writev readv size > 128k ...passed 00:15:43.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:43.993 Test: blockdev comparev and writev ...passed 00:15:43.993 Test: blockdev nvme passthru rw ...passed 00:15:43.993 Test: blockdev nvme passthru vendor specific ...passed 00:15:43.993 Test: blockdev nvme admin passthru ...passed 00:15:43.993 Test: blockdev copy ...passed 00:15:43.994 Suite: bdevio tests on: nvme2n3 00:15:43.994 Test: blockdev write read block ...passed 00:15:43.994 Test: blockdev write zeroes read block ...passed 00:15:43.994 Test: blockdev write zeroes read no split ...passed 00:15:43.994 Test: blockdev write zeroes read split ...passed 00:15:43.994 Test: blockdev write zeroes read split partial ...passed 00:15:43.994 Test: blockdev reset ...passed 
00:15:43.994 Test: blockdev write read 8 blocks ...passed 00:15:43.994 Test: blockdev write read size > 128k ...passed 00:15:43.994 Test: blockdev write read invalid size ...passed 00:15:43.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.994 Test: blockdev write read max offset ...passed 00:15:43.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:43.994 Test: blockdev writev readv 8 blocks ...passed 00:15:43.994 Test: blockdev writev readv 30 x 1block ...passed 00:15:43.994 Test: blockdev writev readv block ...passed 00:15:43.994 Test: blockdev writev readv size > 128k ...passed 00:15:43.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:43.994 Test: blockdev comparev and writev ...passed 00:15:43.994 Test: blockdev nvme passthru rw ...passed 00:15:43.994 Test: blockdev nvme passthru vendor specific ...passed 00:15:43.994 Test: blockdev nvme admin passthru ...passed 00:15:43.994 Test: blockdev copy ...passed 00:15:43.994 Suite: bdevio tests on: nvme2n2 00:15:43.994 Test: blockdev write read block ...passed 00:15:43.994 Test: blockdev write zeroes read block ...passed 00:15:43.994 Test: blockdev write zeroes read no split ...passed 00:15:43.994 Test: blockdev write zeroes read split ...passed 00:15:43.994 Test: blockdev write zeroes read split partial ...passed 00:15:43.994 Test: blockdev reset ...passed 00:15:43.994 Test: blockdev write read 8 blocks ...passed 00:15:43.994 Test: blockdev write read size > 128k ...passed 00:15:43.994 Test: blockdev write read invalid size ...passed 00:15:43.994 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:43.994 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:43.994 Test: blockdev write read max offset ...passed 00:15:43.994 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:43.994 Test: blockdev writev readv 8 blocks ...passed 00:15:43.994 Test: blockdev writev readv 30 x 1block ...passed 00:15:43.994 Test: blockdev writev readv block ...passed 00:15:43.994 Test: blockdev writev readv size > 128k ...passed 00:15:43.994 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:43.994 Test: blockdev comparev and writev ...passed 00:15:43.994 Test: blockdev nvme passthru rw ...passed 00:15:43.994 Test: blockdev nvme passthru vendor specific ...passed 00:15:43.994 Test: blockdev nvme admin passthru ...passed 00:15:43.994 Test: blockdev copy ...passed 00:15:43.994 Suite: bdevio tests on: nvme2n1 00:15:43.994 Test: blockdev write read block ...passed 00:15:43.994 Test: blockdev write zeroes read block ...passed 00:15:43.994 Test: blockdev write zeroes read no split ...passed 00:15:44.252 Test: blockdev write zeroes read split ...passed 00:15:44.253 Test: blockdev write zeroes read split partial ...passed 00:15:44.253 Test: blockdev reset ...passed 00:15:44.253 Test: blockdev write read 8 blocks ...passed 00:15:44.253 Test: blockdev write read size > 128k ...passed 00:15:44.253 Test: blockdev write read invalid size ...passed 00:15:44.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:44.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:44.253 Test: blockdev write read max offset ...passed 00:15:44.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:44.253 Test: blockdev writev readv 8 blocks 
...passed 00:15:44.253 Test: blockdev writev readv 30 x 1block ...passed 00:15:44.253 Test: blockdev writev readv block ...passed 00:15:44.253 Test: blockdev writev readv size > 128k ...passed 00:15:44.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:44.253 Test: blockdev comparev and writev ...passed 00:15:44.253 Test: blockdev nvme passthru rw ...passed 00:15:44.253 Test: blockdev nvme passthru vendor specific ...passed 00:15:44.253 Test: blockdev nvme admin passthru ...passed 00:15:44.253 Test: blockdev copy ...passed 00:15:44.253 Suite: bdevio tests on: nvme1n1 00:15:44.253 Test: blockdev write read block ...passed 00:15:44.253 Test: blockdev write zeroes read block ...passed 00:15:44.253 Test: blockdev write zeroes read no split ...passed 00:15:44.253 Test: blockdev write zeroes read split ...passed 00:15:44.253 Test: blockdev write zeroes read split partial ...passed 00:15:44.253 Test: blockdev reset ...passed 00:15:44.253 Test: blockdev write read 8 blocks ...passed 00:15:44.253 Test: blockdev write read size > 128k ...passed 00:15:44.253 Test: blockdev write read invalid size ...passed 00:15:44.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:44.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:44.253 Test: blockdev write read max offset ...passed 00:15:44.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:44.253 Test: blockdev writev readv 8 blocks ...passed 00:15:44.253 Test: blockdev writev readv 30 x 1block ...passed 00:15:44.253 Test: blockdev writev readv block ...passed 00:15:44.253 Test: blockdev writev readv size > 128k ...passed 00:15:44.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:44.253 Test: blockdev comparev and writev ...passed 00:15:44.253 Test: blockdev nvme passthru rw ...passed 00:15:44.253 Test: blockdev nvme passthru vendor specific ...passed 00:15:44.253 Test: blockdev nvme admin passthru ...passed 00:15:44.253 Test: blockdev copy ...passed 00:15:44.253 Suite: bdevio tests on: nvme0n1 00:15:44.253 Test: blockdev write read block ...passed 00:15:44.253 Test: blockdev write zeroes read block ...passed 00:15:44.253 Test: blockdev write zeroes read no split ...passed 00:15:44.253 Test: blockdev write zeroes read split ...passed 00:15:44.253 Test: blockdev write zeroes read split partial ...passed 00:15:44.253 Test: blockdev reset ...passed 00:15:44.253 Test: blockdev write read 8 blocks ...passed 00:15:44.253 Test: blockdev write read size > 128k ...passed 00:15:44.253 Test: blockdev write read invalid size ...passed 00:15:44.253 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:44.253 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:44.253 Test: blockdev write read max offset ...passed 00:15:44.253 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:44.253 Test: blockdev writev readv 8 blocks ...passed 00:15:44.253 Test: blockdev writev readv 30 x 1block ...passed 00:15:44.253 Test: blockdev writev readv block ...passed 00:15:44.253 Test: blockdev writev readv size > 128k ...passed 00:15:44.253 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:44.253 Test: blockdev comparev and writev ...passed 00:15:44.253 Test: blockdev nvme passthru rw ...passed 00:15:44.253 Test: blockdev nvme passthru vendor specific ...passed 00:15:44.253 Test: blockdev nvme admin passthru ...passed 00:15:44.253 Test: blockdev copy ...passed 
00:15:44.253 00:15:44.253 Run Summary: Type Total Ran Passed Failed Inactive 00:15:44.253 suites 6 6 n/a 0 0 00:15:44.253 tests 138 138 138 0 0 00:15:44.253 asserts 780 780 780 0 n/a 00:15:44.253 00:15:44.253 Elapsed time = 1.343 seconds 00:15:44.253 0 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75246 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75246 ']' 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75246 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.253 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75246 00:15:44.512 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.512 killing process with pid 75246 00:15:44.513 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.513 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75246' 00:15:44.513 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75246 00:15:44.513 12:09:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75246 00:15:45.888 12:09:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:45.888 00:15:45.888 real 0m2.930s 00:15:45.888 user 0m6.729s 00:15:45.888 sys 0m0.430s 00:15:45.888 ************************************ 00:15:45.888 END TEST bdev_bounds 00:15:45.888 ************************************ 00:15:45.888 12:09:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.888 12:09:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:45.888 12:09:33 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:45.888 12:09:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:45.888 12:09:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.888 12:09:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.888 ************************************ 00:15:45.888 START TEST bdev_nbd 00:15:45.888 ************************************ 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75306 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75306 /var/tmp/spdk-nbd.sock 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75306 ']' 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:45.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:45.888 12:09:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:45.888 [2024-07-26 12:09:33.703825] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:15:45.888 [2024-07-26 12:09:33.703956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.267 [2024-07-26 12:09:33.874839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.267 [2024-07-26 12:09:34.107015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:46.833 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.093 
1+0 records in 00:15:47.093 1+0 records out 00:15:47.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050647 s, 8.1 MB/s 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:47.093 12:09:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.351 1+0 records in 00:15:47.351 1+0 records out 00:15:47.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711531 s, 5.8 MB/s 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:47.351 12:09:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.351 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.351 1+0 records in 00:15:47.351 1+0 records out 00:15:47.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561193 s, 7.3 MB/s 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.610 1+0 records in 00:15:47.610 1+0 records out 00:15:47.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760857 s, 5.4 MB/s 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:47.610 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.868 1+0 records in 00:15:47.868 1+0 records out 00:15:47.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741078 s, 5.5 MB/s 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:47.868 12:09:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:15:48.128 12:09:36 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.128 1+0 records in 00:15:48.128 1+0 records out 00:15:48.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848973 s, 4.8 MB/s 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.128 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd0", 00:15:48.387 "bdev_name": "nvme0n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd1", 00:15:48.387 "bdev_name": "nvme1n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd2", 00:15:48.387 "bdev_name": "nvme2n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd3", 00:15:48.387 "bdev_name": "nvme2n2" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd4", 00:15:48.387 "bdev_name": "nvme2n3" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd5", 00:15:48.387 "bdev_name": "nvme3n1" 00:15:48.387 } 00:15:48.387 ]' 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd0", 00:15:48.387 "bdev_name": "nvme0n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd1", 00:15:48.387 "bdev_name": "nvme1n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd2", 00:15:48.387 "bdev_name": "nvme2n1" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd3", 00:15:48.387 "bdev_name": "nvme2n2" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd4", 00:15:48.387 "bdev_name": "nvme2n3" 00:15:48.387 }, 00:15:48.387 { 00:15:48.387 "nbd_device": "/dev/nbd5", 00:15:48.387 "bdev_name": "nvme3n1" 00:15:48.387 } 00:15:48.387 ]' 00:15:48.387 12:09:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.387 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.645 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.903 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:49.166 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.167 12:09:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.167 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.425 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.682 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:49.940 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:50.200 /dev/nbd0 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.200 1+0 records in 00:15:50.200 1+0 records out 00:15:50.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541467 s, 7.6 MB/s 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:50.200 12:09:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.200 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:50.200 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:50.459 /dev/nbd1 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.459 1+0 records in 00:15:50.459 1+0 records out 00:15:50.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824513 s, 5.0 MB/s 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:50.459 12:09:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:50.459 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:50.716 /dev/nbd10 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.717 1+0 records in 00:15:50.717 1+0 records out 00:15:50.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565143 s, 7.2 MB/s 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:50.717 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:50.978 /dev/nbd11 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:50.978 12:09:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.978 1+0 records in 00:15:50.978 1+0 records out 00:15:50.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701583 s, 5.8 MB/s 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:50.978 /dev/nbd12 00:15:50.978 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.237 1+0 records in 00:15:51.237 1+0 records out 00:15:51.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966704 s, 4.2 MB/s 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:51.237 12:09:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:51.237 /dev/nbd13 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.237 1+0 records in 00:15:51.237 1+0 records out 00:15:51.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140888 s, 2.9 MB/s 00:15:51.237 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd0", 00:15:51.496 "bdev_name": "nvme0n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd1", 00:15:51.496 "bdev_name": "nvme1n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd10", 00:15:51.496 "bdev_name": "nvme2n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd11", 00:15:51.496 "bdev_name": "nvme2n2" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd12", 00:15:51.496 "bdev_name": "nvme2n3" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd13", 00:15:51.496 "bdev_name": "nvme3n1" 00:15:51.496 } 00:15:51.496 ]' 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd0", 00:15:51.496 "bdev_name": "nvme0n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd1", 00:15:51.496 "bdev_name": "nvme1n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd10", 00:15:51.496 "bdev_name": "nvme2n1" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd11", 00:15:51.496 "bdev_name": "nvme2n2" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd12", 00:15:51.496 "bdev_name": "nvme2n3" 00:15:51.496 }, 00:15:51.496 { 00:15:51.496 "nbd_device": "/dev/nbd13", 00:15:51.496 "bdev_name": "nvme3n1" 00:15:51.496 } 00:15:51.496 ]' 00:15:51.496 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:51.755 /dev/nbd1 00:15:51.755 /dev/nbd10 00:15:51.755 /dev/nbd11 00:15:51.755 /dev/nbd12 00:15:51.755 /dev/nbd13' 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:51.755 /dev/nbd1 00:15:51.755 /dev/nbd10 00:15:51.755 /dev/nbd11 00:15:51.755 /dev/nbd12 00:15:51.755 /dev/nbd13' 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:51.755 256+0 records in 00:15:51.755 256+0 records out 00:15:51.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112508 s, 93.2 MB/s 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:51.755 256+0 records in 00:15:51.755 256+0 records out 00:15:51.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120865 s, 8.7 MB/s 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:51.755 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:52.014 256+0 records in 00:15:52.014 256+0 records out 00:15:52.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155905 s, 
6.7 MB/s 00:15:52.014 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:52.014 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:52.014 256+0 records in 00:15:52.014 256+0 records out 00:15:52.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121977 s, 8.6 MB/s 00:15:52.014 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:52.014 12:09:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:52.273 256+0 records in 00:15:52.273 256+0 records out 00:15:52.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117043 s, 9.0 MB/s 00:15:52.273 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:52.273 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:52.273 256+0 records in 00:15:52.273 256+0 records out 00:15:52.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11728 s, 8.9 MB/s 00:15:52.273 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:52.273 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:52.532 256+0 records in 00:15:52.532 256+0 records out 00:15:52.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121805 s, 8.6 MB/s 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:52.532 
12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.532 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.791 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.050 12:09:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.308 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.567 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.825 
12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.825 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:54.083 12:09:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:54.343 malloc_lvol_verify 00:15:54.343 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:54.343 4e9585b5-dd42-418f-858a-e3c93de198d1 00:15:54.343 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:54.601 03feedfd-bbb1-4baf-84b8-1d19f65f8118 00:15:54.601 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:54.859 /dev/nbd0 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:54.859 mke2fs 1.46.5 (30-Dec-2021) 00:15:54.859 Discarding device blocks: 0/4096 done 00:15:54.859 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:54.859 
00:15:54.859 Allocating group tables: 0/1 done 00:15:54.859 Writing inode tables: 0/1 done 00:15:54.859 Creating journal (1024 blocks): done 00:15:54.859 Writing superblocks and filesystem accounting information: 0/1 done 00:15:54.859 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.859 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75306 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75306 ']' 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75306 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75306 00:15:55.118 killing process with pid 75306 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75306' 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75306 00:15:55.118 12:09:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75306 00:15:56.497 ************************************ 00:15:56.497 END TEST bdev_nbd 00:15:56.497 ************************************ 00:15:56.497 12:09:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:56.497 00:15:56.497 real 0m10.732s 00:15:56.497 user 0m13.754s 00:15:56.497 sys 0m4.319s 00:15:56.497 
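The bdev_nbd test that finishes above exports each bdev as a /dev/nbdX device over the /var/tmp/spdk-nbd.sock RPC socket, waits for the kernel to list it in /proc/partitions, round-trips random data through it with dd and cmp, and then detaches everything again. Below is a condensed, hand-written sketch of that start/verify/stop cycle, not the actual nbd_common.sh helpers; it reuses the RPC script, socket, and device names visible in this run and assumes an SPDK target is already serving the socket with the nbd kernel module loaded.

#!/usr/bin/env bash
# Sketch of the nbd start/verify/stop cycle exercised above (assumes a running
# SPDK app on /var/tmp/spdk-nbd.sock and a loaded nbd.ko).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdevs=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# Export each bdev as an nbd device and poll until the kernel sees it.
for i in "${!bdevs[@]}"; do
    "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    for ((try = 1; try <= 20; try++)); do
        grep -q -w "$(basename "${nbds[$i]}")" /proc/partitions && break
        sleep 0.1
    done
done

# Write one random buffer through every device and read it back for comparison.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in "${nbds[@]}"; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" "$nbd"
done
rm -f "$tmp"

# Detach the devices again via the same RPC socket.
for nbd in "${nbds[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"
done

The real helpers additionally cross-check the nbd_get_disks JSON listing before and after the cycle, which is what the jq / grep -c /dev/nbd count checks in the trace correspond to, and the lvol variant seen just above formats the exported device with mkfs.ext4 as an extra sanity check.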
12:09:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.497 12:09:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:56.497 12:09:44 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:56.497 12:09:44 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:56.497 12:09:44 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:56.497 12:09:44 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:56.497 12:09:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.497 12:09:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.497 12:09:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.497 ************************************ 00:15:56.497 START TEST bdev_fio 00:15:56.497 ************************************ 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:56.497 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == 
*\f\i\o\-\3* ]] 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.497 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:56.756 ************************************ 00:15:56.756 START TEST bdev_fio_rw_verify 00:15:56.756 ************************************ 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 
--bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:56.756 12:09:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:56.756 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:56.756 fio-3.35 00:15:56.756 Starting 6 threads 00:16:08.980 00:16:08.980 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75714: Fri Jul 26 12:09:55 2024 00:16:08.980 read: IOPS=33.3k, BW=130MiB/s (137MB/s)(1302MiB/10001msec) 00:16:08.980 slat (usec): min=2, max=816, 
avg= 6.11, stdev= 4.08 00:16:08.980 clat (usec): min=106, max=254520, avg=567.82, stdev=1259.45 00:16:08.980 lat (usec): min=110, max=254534, avg=573.93, stdev=1259.58 00:16:08.980 clat percentiles (usec): 00:16:08.980 | 50.000th=[ 586], 99.000th=[ 1074], 99.900th=[ 2024], 00:16:08.980 | 99.990th=[ 3621], 99.999th=[254804] 00:16:08.980 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(1317MiB/10001msec); 0 zone resets 00:16:08.980 slat (usec): min=7, max=3577, avg=21.84, stdev=28.46 00:16:08.980 clat (usec): min=64, max=4496, avg=642.38, stdev=220.12 00:16:08.980 lat (usec): min=99, max=4535, avg=664.23, stdev=224.28 00:16:08.980 clat percentiles (usec): 00:16:08.980 | 50.000th=[ 644], 99.000th=[ 1352], 99.900th=[ 2114], 99.990th=[ 2802], 00:16:08.980 | 99.999th=[ 4228] 00:16:08.980 bw ( KiB/s): min=94488, max=171376, per=100.00%, avg=135056.68, stdev=2985.57, samples=114 00:16:08.980 iops : min=23622, max=42844, avg=33763.95, stdev=746.40, samples=114 00:16:08.980 lat (usec) : 100=0.01%, 250=4.39%, 500=23.62%, 750=56.01%, 1000=12.66% 00:16:08.980 lat (msec) : 2=3.19%, 4=0.12%, 10=0.01%, 500=0.01% 00:16:08.980 cpu : usr=58.07%, sys=28.87%, ctx=8736, majf=0, minf=27698 00:16:08.980 IO depths : 1=11.9%, 2=24.2%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:08.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.980 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.980 issued rwts: total=333393,337096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.980 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:08.980 00:16:08.980 Run status group 0 (all jobs): 00:16:08.980 READ: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=1302MiB (1366MB), run=10001-10001msec 00:16:08.980 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=1317MiB (1381MB), run=10001-10001msec 00:16:09.240 ----------------------------------------------------- 00:16:09.240 Suppressions used: 00:16:09.241 count bytes template 00:16:09.241 6 48 /usr/src/fio/parse.c 00:16:09.241 3460 332160 /usr/src/fio/iolog.c 00:16:09.241 1 8 libtcmalloc_minimal.so 00:16:09.241 1 904 libcrypto.so 00:16:09.241 ----------------------------------------------------- 00:16:09.241 00:16:09.241 00:16:09.241 real 0m12.521s 00:16:09.241 user 0m36.907s 00:16:09.241 sys 0m17.685s 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:09.241 ************************************ 00:16:09.241 END TEST bdev_fio_rw_verify 00:16:09.241 ************************************ 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 
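The bdev_fio_rw_verify pass that just completed drives stock fio through SPDK's external ioengine plugin: fio_config_gen writes a bdev.fio job file with one [job_nvmeXnY] section per bdev, and fio is launched with the spdk_bdev engine, the bdev JSON config, and the ASan runtime preloaded so fio and the instrumented plugin resolve the same sanitizer. A minimal, hand-assembled equivalent is sketched below; the command line and per-job filename entries mirror what the trace shows, while the [global] verify settings are an assumption about what fio_config_gen fills in, not copied from it.

# Sketch of the fio-over-spdk_bdev invocation above (paths from this run).
cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio <<'EOF'
[global]
thread=1
; verify options normally generated by fio_config_gen (assumed here)
verify=crc32c
serialize_overlap=1

[job_nvme0n1]
filename=nvme0n1

[job_nvme1n1]
filename=nvme1n1
; ...one section per bdev, as echoed in the trace above
EOF

# The spdk_bdev engine is a fio plugin; preload libasan ahead of it so both
# fio and the plugin share one sanitizer runtime, as the trace does.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output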
00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a1136689-070e-4006-9f8c-1652d020fb4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a1136689-070e-4006-9f8c-1652d020fb4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "00da4045-bed1-44e7-9a68-8a26f20c58b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "00da4045-bed1-44e7-9a68-8a26f20c58b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e3b1aa4c-519f-4163-bcbc-fcf56b000241"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3b1aa4c-519f-4163-bcbc-fcf56b000241",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": 
false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "18174629-d6d8-45c8-98cc-0641600443fe"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "18174629-d6d8-45c8-98cc-0641600443fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "bb10b44a-3178-490e-8777-86490e783e21"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bb10b44a-3178-490e-8777-86490e783e21",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "573176b5-9ea8-4d9c-a5c7-f53e8b4d28fb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "573176b5-9ea8-4d9c-a5c7-f53e8b4d28fb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:09.241 /home/vagrant/spdk_repo/spdk 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM 
EXIT 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:09.241 00:16:09.241 real 0m12.733s 00:16:09.241 user 0m37.017s 00:16:09.241 sys 0m17.792s 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:09.241 12:09:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:09.241 ************************************ 00:16:09.241 END TEST bdev_fio 00:16:09.241 ************************************ 00:16:09.241 12:09:57 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:09.241 12:09:57 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:09.241 12:09:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:09.241 12:09:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:09.241 12:09:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.241 ************************************ 00:16:09.241 START TEST bdev_verify 00:16:09.241 ************************************ 00:16:09.241 12:09:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:09.500 [2024-07-26 12:09:57.296289] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:09.500 [2024-07-26 12:09:57.296442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75890 ] 00:16:09.500 [2024-07-26 12:09:57.468734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.759 [2024-07-26 12:09:57.695897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.759 [2024-07-26 12:09:57.695950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.327 Running I/O for 5 seconds... 
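The bdev_verify test starting here is a thin wrapper around the bdevperf example application: it loads the same bdev.json, runs a verify workload at queue depth 128 with 4 KiB I/Os for 5 seconds on core mask 0x3, and the per-bdev latency table that follows is bdevperf's own summary. The invocation reduces to the command below, taken directly from the trace; because it runs with -C on two cores, every bdev appears twice in the results, once for core mask 0x1 and once for 0x2.

# bdevperf verify pass as launched above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3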
00:16:15.615 00:16:15.615 Latency(us) 00:16:15.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.615 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0x0 length 0xa0000 00:16:15.615 nvme0n1 : 5.05 1874.99 7.32 0.00 0.00 68160.85 9001.33 65693.92 00:16:15.615 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0xa0000 length 0xa0000 00:16:15.615 nvme0n1 : 5.07 1995.57 7.80 0.00 0.00 63791.24 14949.58 59377.20 00:16:15.615 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0x0 length 0xbd0bd 00:16:15.615 nvme1n1 : 5.04 2988.70 11.67 0.00 0.00 42669.01 4211.15 61482.77 00:16:15.615 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:15.615 nvme1n1 : 5.07 3067.99 11.98 0.00 0.00 41350.77 3947.95 53481.59 00:16:15.615 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0x0 length 0x80000 00:16:15.615 nvme2n1 : 5.06 1898.19 7.41 0.00 0.00 67077.12 11528.02 64430.57 00:16:15.615 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.615 Verification LBA range: start 0x80000 length 0x80000 00:16:15.616 nvme2n1 : 5.07 2018.99 7.89 0.00 0.00 62807.87 10896.35 55587.16 00:16:15.616 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x0 length 0x80000 00:16:15.616 nvme2n2 : 5.06 1872.27 7.31 0.00 0.00 67906.36 13212.48 59377.20 00:16:15.616 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x80000 length 0x80000 00:16:15.616 nvme2n2 : 5.07 1994.26 7.79 0.00 0.00 64085.76 8053.82 69062.84 00:16:15.616 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x0 length 0x80000 00:16:15.616 nvme2n3 : 5.06 1870.69 7.31 0.00 0.00 67880.40 14528.46 64851.69 00:16:15.616 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x80000 length 0x80000 00:16:15.616 nvme2n3 : 5.06 1998.84 7.81 0.00 0.00 63828.53 9159.25 61482.77 00:16:15.616 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x0 length 0x20000 00:16:15.616 nvme3n1 : 5.07 1869.41 7.30 0.00 0.00 67859.18 10317.31 69062.84 00:16:15.616 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:15.616 Verification LBA range: start 0x20000 length 0x20000 00:16:15.616 nvme3n1 : 5.06 1997.05 7.80 0.00 0.00 63816.14 11633.30 57692.74 00:16:15.616 =================================================================================================================== 00:16:15.616 Total : 25446.95 99.40 0.00 0.00 60028.54 3947.95 69062.84 00:16:16.994 00:16:16.994 real 0m7.399s 00:16:16.994 user 0m11.040s 00:16:16.994 sys 0m2.138s 00:16:16.994 12:10:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.994 12:10:04 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:16.994 ************************************ 00:16:16.994 END TEST bdev_verify 00:16:16.994 ************************************ 00:16:16.994 12:10:04 blockdev_xnvme -- bdev/blockdev.sh@777 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:16.994 12:10:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:16.994 12:10:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.994 12:10:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.994 ************************************ 00:16:16.994 START TEST bdev_verify_big_io 00:16:16.994 ************************************ 00:16:16.994 12:10:04 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:16.994 [2024-07-26 12:10:04.762984] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:16.994 [2024-07-26 12:10:04.763114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75995 ] 00:16:16.994 [2024-07-26 12:10:04.935206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.253 [2024-07-26 12:10:05.176037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.254 [2024-07-26 12:10:05.176088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.191 Running I/O for 5 seconds... 00:16:24.857 00:16:24.857 Latency(us) 00:16:24.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.857 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0xa000 00:16:24.857 nvme0n1 : 5.78 166.10 10.38 0.00 0.00 744303.63 98961.99 936559.45 00:16:24.857 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0xa000 length 0xa000 00:16:24.857 nvme0n1 : 5.78 155.61 9.73 0.00 0.00 794872.45 21792.69 822016.21 00:16:24.857 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0xbd0b 00:16:24.857 nvme1n1 : 5.78 171.54 10.72 0.00 0.00 703892.98 8159.10 1037627.01 00:16:24.857 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:24.857 nvme1n1 : 5.78 163.34 10.21 0.00 0.00 739343.53 53271.03 1199335.12 00:16:24.857 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0x8000 00:16:24.857 nvme2n1 : 5.79 174.91 10.93 0.00 0.00 673539.51 130545.61 663677.02 00:16:24.857 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x8000 length 0x8000 00:16:24.857 nvme2n1 : 5.78 152.18 9.51 0.00 0.00 770235.83 68641.72 943297.29 00:16:24.857 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0x8000 00:16:24.857 nvme2n2 : 5.79 140.88 8.81 0.00 0.00 826690.66 48007.09 1489062.14 00:16:24.857 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x8000 length 0x8000 00:16:24.857 nvme2n2 : 5.81 173.51 
10.84 0.00 0.00 666553.38 25056.33 781589.18 00:16:24.857 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0x8000 00:16:24.857 nvme2n3 : 5.82 134.74 8.42 0.00 0.00 842823.79 98540.88 1644032.41 00:16:24.857 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x8000 length 0x8000 00:16:24.857 nvme2n3 : 5.81 151.41 9.46 0.00 0.00 744389.42 99804.22 1037627.01 00:16:24.857 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x0 length 0x2000 00:16:24.857 nvme3n1 : 5.81 187.33 11.71 0.00 0.00 596339.63 3237.32 559240.53 00:16:24.857 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:24.857 Verification LBA range: start 0x2000 length 0x2000 00:16:24.857 nvme3n1 : 5.83 205.90 12.87 0.00 0.00 539390.74 2276.65 1118481.07 00:16:24.857 =================================================================================================================== 00:16:24.857 Total : 1977.46 123.59 0.00 0.00 710384.87 2276.65 1644032.41 00:16:25.500 00:16:25.500 real 0m8.585s 00:16:25.500 user 0m15.168s 00:16:25.500 sys 0m0.658s 00:16:25.500 12:10:13 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.500 12:10:13 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.500 ************************************ 00:16:25.500 END TEST bdev_verify_big_io 00:16:25.500 ************************************ 00:16:25.500 12:10:13 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.500 12:10:13 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:25.500 12:10:13 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.500 12:10:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.500 ************************************ 00:16:25.500 START TEST bdev_write_zeroes 00:16:25.500 ************************************ 00:16:25.500 12:10:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:25.500 [2024-07-26 12:10:13.419161] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:25.500 [2024-07-26 12:10:13.419295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76107 ] 00:16:25.758 [2024-07-26 12:10:13.590805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.016 [2024-07-26 12:10:13.857846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.583 Running I/O for 1 seconds... 
00:16:27.520 00:16:27.520 Latency(us) 00:16:27.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.520 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme0n1 : 1.01 9423.12 36.81 0.00 0.00 13571.68 7790.62 28004.14 00:16:27.520 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme1n1 : 1.01 12128.79 47.38 0.00 0.00 10519.66 4474.35 21161.02 00:16:27.520 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme2n1 : 1.01 9477.51 37.02 0.00 0.00 13420.24 4921.78 28635.81 00:16:27.520 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme2n2 : 1.01 9467.72 36.98 0.00 0.00 13417.65 4869.14 29056.93 00:16:27.520 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme2n3 : 1.02 9458.03 36.95 0.00 0.00 13420.89 4974.42 29267.48 00:16:27.520 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:27.520 nvme3n1 : 1.02 9449.49 36.91 0.00 0.00 13427.46 5106.02 29478.04 00:16:27.520 =================================================================================================================== 00:16:27.520 Total : 59404.67 232.05 0.00 0.00 12853.71 4474.35 29478.04 00:16:28.898 00:16:28.898 real 0m3.348s 00:16:28.898 user 0m2.532s 00:16:28.898 sys 0m0.629s 00:16:28.898 12:10:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.898 12:10:16 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 ************************************ 00:16:28.898 END TEST bdev_write_zeroes 00:16:28.898 ************************************ 00:16:28.898 12:10:16 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.898 12:10:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:28.898 12:10:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.898 12:10:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 ************************************ 00:16:28.898 START TEST bdev_json_nonenclosed 00:16:28.898 ************************************ 00:16:28.898 12:10:16 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:28.898 [2024-07-26 12:10:16.839881] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:28.898 [2024-07-26 12:10:16.840011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76174 ] 00:16:29.157 [2024-07-26 12:10:17.010298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.416 [2024-07-26 12:10:17.246698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.416 [2024-07-26 12:10:17.246790] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
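The "not enclosed in {}" error above is the expected outcome of this negative test: bdevperf is handed test/bdev/nonenclosed.json, a deliberately malformed configuration, and must refuse to start. The fixture's exact contents are not captured in this log; illustratively, a file whose top level is a JSON array rather than an object would trip the same check when passed to --json (the path and contents below are hypothetical, not the shipped fixture):

# Hypothetical reproduction of the failure mode -- valid JSON, but not enclosed in an object.
cat > /tmp/nonenclosed.json <<'EOF'
[
  { "subsystem": "bdev", "config": [] }
]
EOF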
00:16:29.416 [2024-07-26 12:10:17.246815] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:29.416 [2024-07-26 12:10:17.246830] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:29.983 00:16:29.983 real 0m0.948s 00:16:29.983 user 0m0.693s 00:16:29.983 sys 0m0.150s 00:16:29.983 12:10:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.983 12:10:17 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:29.983 ************************************ 00:16:29.983 END TEST bdev_json_nonenclosed 00:16:29.983 ************************************ 00:16:29.983 12:10:17 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:29.983 12:10:17 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:29.983 12:10:17 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.984 12:10:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.984 ************************************ 00:16:29.984 START TEST bdev_json_nonarray 00:16:29.984 ************************************ 00:16:29.984 12:10:17 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:29.984 [2024-07-26 12:10:17.851944] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:29.984 [2024-07-26 12:10:17.852079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76205 ] 00:16:30.243 [2024-07-26 12:10:18.023065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.502 [2024-07-26 12:10:18.255691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.502 [2024-07-26 12:10:18.255797] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
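Likewise, the "'subsystems' should be an array" error above comes from feeding bdevperf test/bdev/nonarray.json. Again the fixture itself is not shown in this log; a configuration of the following shape (hypothetical) would fail the same validation, since "subsystems" maps to an object instead of an array:

# Hypothetical reproduction -- the top level is an object, but "subsystems" is not an array.
cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF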
00:16:30.502 [2024-07-26 12:10:18.255822] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:30.502 [2024-07-26 12:10:18.255836] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:30.761 00:16:30.761 real 0m0.957s 00:16:30.761 user 0m0.705s 00:16:30.761 sys 0m0.146s 00:16:30.761 ************************************ 00:16:30.761 END TEST bdev_json_nonarray 00:16:30.761 ************************************ 00:16:30.761 12:10:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.761 12:10:18 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:31.020 12:10:18 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:31.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:38.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.413 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:38.413 00:16:38.413 real 1m8.591s 00:16:38.413 user 1m41.308s 00:16:38.413 sys 0m41.747s 00:16:38.413 12:10:26 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.413 ************************************ 00:16:38.413 END TEST blockdev_xnvme 00:16:38.413 12:10:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:38.413 ************************************ 00:16:38.413 12:10:26 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:38.413 12:10:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:38.413 12:10:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.413 12:10:26 -- common/autotest_common.sh@10 -- # set +x 00:16:38.413 ************************************ 00:16:38.413 START TEST ublk 00:16:38.414 ************************************ 00:16:38.414 12:10:26 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:38.673 * Looking for test storage... 
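With the blockdev_xnvme suite finished, the ublk tests that follow exercise the kernel ublk driver against SPDK. The first case, test_save_ublk_config, starts spdk_tgt, creates a malloc-backed /dev/ublkb0, dumps the live configuration with save_config, and then restarts the target from that dump. A minimal sketch of the same round trip using the stock rpc.py client, run from the SPDK repo root, is below; the bdev name, sizes and queue settings mirror the test, while the output file name and the bare ublk_create_target call are assumptions:

# Run as root, as the autotest harness does.
# Assumes a spdk_tgt already running with '-L ublk' and listening on the default RPC socket.
sudo modprobe ublk_drv
scripts/rpc.py ublk_create_target                       # the test additionally pins this to core 0 via a cpumask
scripts/rpc.py bdev_malloc_create 128 4096 -b malloc0   # 128 MiB malloc bdev with 4096-byte blocks
scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # exposes /dev/ublkb0
scripts/rpc.py save_config > ublk.json
# A fresh target can later be started directly from the saved dump:
build/bin/spdk_tgt -L ublk -c ublk.json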
00:16:38.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:38.673 12:10:26 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:38.673 12:10:26 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:38.673 12:10:26 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:38.673 12:10:26 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:38.673 12:10:26 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:38.673 12:10:26 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:38.673 12:10:26 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:38.673 12:10:26 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:38.673 12:10:26 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:38.673 12:10:26 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:38.673 12:10:26 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.673 12:10:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.673 ************************************ 00:16:38.673 START TEST test_save_ublk_config 00:16:38.673 ************************************ 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76501 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76501 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76501 ']' 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.673 12:10:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:38.673 [2024-07-26 12:10:26.581581] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:16:38.673 [2024-07-26 12:10:26.581735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76501 ] 00:16:38.931 [2024-07-26 12:10:26.755288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.190 [2024-07-26 12:10:27.004993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.128 12:10:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:40.128 [2024-07-26 12:10:27.962148] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:40.128 [2024-07-26 12:10:27.963349] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.128 malloc0 00:16:40.128 [2024-07-26 12:10:28.050399] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:40.128 [2024-07-26 12:10:28.050506] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:40.128 [2024-07-26 12:10:28.050519] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:40.128 [2024-07-26 12:10:28.050532] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:40.128 [2024-07-26 12:10:28.062144] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:40.128 [2024-07-26 12:10:28.062203] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:40.128 [2024-07-26 12:10:28.073143] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:40.128 [2024-07-26 12:10:28.073280] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:40.128 [2024-07-26 12:10:28.090142] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:40.128 0 00:16:40.128 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.128 12:10:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:40.128 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.128 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:40.696 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.696 12:10:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:40.696 "subsystems": [ 00:16:40.696 { 00:16:40.696 "subsystem": "keyring", 00:16:40.696 "config": [] 00:16:40.696 }, 00:16:40.696 { 00:16:40.696 "subsystem": "iobuf", 00:16:40.696 "config": [ 00:16:40.696 { 00:16:40.696 "method": "iobuf_set_options", 00:16:40.696 "params": { 00:16:40.696 "small_pool_count": 8192, 00:16:40.696 "large_pool_count": 1024, 00:16:40.696 "small_bufsize": 8192, 00:16:40.696 "large_bufsize": 135168 00:16:40.696 } 00:16:40.696 } 00:16:40.696 ] 00:16:40.696 }, 00:16:40.696 { 
00:16:40.696 "subsystem": "sock", 00:16:40.696 "config": [ 00:16:40.696 { 00:16:40.697 "method": "sock_set_default_impl", 00:16:40.697 "params": { 00:16:40.697 "impl_name": "posix" 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "sock_impl_set_options", 00:16:40.697 "params": { 00:16:40.697 "impl_name": "ssl", 00:16:40.697 "recv_buf_size": 4096, 00:16:40.697 "send_buf_size": 4096, 00:16:40.697 "enable_recv_pipe": true, 00:16:40.697 "enable_quickack": false, 00:16:40.697 "enable_placement_id": 0, 00:16:40.697 "enable_zerocopy_send_server": true, 00:16:40.697 "enable_zerocopy_send_client": false, 00:16:40.697 "zerocopy_threshold": 0, 00:16:40.697 "tls_version": 0, 00:16:40.697 "enable_ktls": false 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "sock_impl_set_options", 00:16:40.697 "params": { 00:16:40.697 "impl_name": "posix", 00:16:40.697 "recv_buf_size": 2097152, 00:16:40.697 "send_buf_size": 2097152, 00:16:40.697 "enable_recv_pipe": true, 00:16:40.697 "enable_quickack": false, 00:16:40.697 "enable_placement_id": 0, 00:16:40.697 "enable_zerocopy_send_server": true, 00:16:40.697 "enable_zerocopy_send_client": false, 00:16:40.697 "zerocopy_threshold": 0, 00:16:40.697 "tls_version": 0, 00:16:40.697 "enable_ktls": false 00:16:40.697 } 00:16:40.697 } 00:16:40.697 ] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "vmd", 00:16:40.697 "config": [] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "accel", 00:16:40.697 "config": [ 00:16:40.697 { 00:16:40.697 "method": "accel_set_options", 00:16:40.697 "params": { 00:16:40.697 "small_cache_size": 128, 00:16:40.697 "large_cache_size": 16, 00:16:40.697 "task_count": 2048, 00:16:40.697 "sequence_count": 2048, 00:16:40.697 "buf_count": 2048 00:16:40.697 } 00:16:40.697 } 00:16:40.697 ] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "bdev", 00:16:40.697 "config": [ 00:16:40.697 { 00:16:40.697 "method": "bdev_set_options", 00:16:40.697 "params": { 00:16:40.697 "bdev_io_pool_size": 65535, 00:16:40.697 "bdev_io_cache_size": 256, 00:16:40.697 "bdev_auto_examine": true, 00:16:40.697 "iobuf_small_cache_size": 128, 00:16:40.697 "iobuf_large_cache_size": 16 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_raid_set_options", 00:16:40.697 "params": { 00:16:40.697 "process_window_size_kb": 1024, 00:16:40.697 "process_max_bandwidth_mb_sec": 0 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_iscsi_set_options", 00:16:40.697 "params": { 00:16:40.697 "timeout_sec": 30 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_nvme_set_options", 00:16:40.697 "params": { 00:16:40.697 "action_on_timeout": "none", 00:16:40.697 "timeout_us": 0, 00:16:40.697 "timeout_admin_us": 0, 00:16:40.697 "keep_alive_timeout_ms": 10000, 00:16:40.697 "arbitration_burst": 0, 00:16:40.697 "low_priority_weight": 0, 00:16:40.697 "medium_priority_weight": 0, 00:16:40.697 "high_priority_weight": 0, 00:16:40.697 "nvme_adminq_poll_period_us": 10000, 00:16:40.697 "nvme_ioq_poll_period_us": 0, 00:16:40.697 "io_queue_requests": 0, 00:16:40.697 "delay_cmd_submit": true, 00:16:40.697 "transport_retry_count": 4, 00:16:40.697 "bdev_retry_count": 3, 00:16:40.697 "transport_ack_timeout": 0, 00:16:40.697 "ctrlr_loss_timeout_sec": 0, 00:16:40.697 "reconnect_delay_sec": 0, 00:16:40.697 "fast_io_fail_timeout_sec": 0, 00:16:40.697 "disable_auto_failback": false, 00:16:40.697 "generate_uuids": false, 00:16:40.697 "transport_tos": 0, 00:16:40.697 "nvme_error_stat": false, 
00:16:40.697 "rdma_srq_size": 0, 00:16:40.697 "io_path_stat": false, 00:16:40.697 "allow_accel_sequence": false, 00:16:40.697 "rdma_max_cq_size": 0, 00:16:40.697 "rdma_cm_event_timeout_ms": 0, 00:16:40.697 "dhchap_digests": [ 00:16:40.697 "sha256", 00:16:40.697 "sha384", 00:16:40.697 "sha512" 00:16:40.697 ], 00:16:40.697 "dhchap_dhgroups": [ 00:16:40.697 "null", 00:16:40.697 "ffdhe2048", 00:16:40.697 "ffdhe3072", 00:16:40.697 "ffdhe4096", 00:16:40.697 "ffdhe6144", 00:16:40.697 "ffdhe8192" 00:16:40.697 ] 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_nvme_set_hotplug", 00:16:40.697 "params": { 00:16:40.697 "period_us": 100000, 00:16:40.697 "enable": false 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_malloc_create", 00:16:40.697 "params": { 00:16:40.697 "name": "malloc0", 00:16:40.697 "num_blocks": 8192, 00:16:40.697 "block_size": 4096, 00:16:40.697 "physical_block_size": 4096, 00:16:40.697 "uuid": "01675fd0-c5ae-47d7-a36a-e989284589e0", 00:16:40.697 "optimal_io_boundary": 0, 00:16:40.697 "md_size": 0, 00:16:40.697 "dif_type": 0, 00:16:40.697 "dif_is_head_of_md": false, 00:16:40.697 "dif_pi_format": 0 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "bdev_wait_for_examine" 00:16:40.697 } 00:16:40.697 ] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "scsi", 00:16:40.697 "config": null 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "scheduler", 00:16:40.697 "config": [ 00:16:40.697 { 00:16:40.697 "method": "framework_set_scheduler", 00:16:40.697 "params": { 00:16:40.697 "name": "static" 00:16:40.697 } 00:16:40.697 } 00:16:40.697 ] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "vhost_scsi", 00:16:40.697 "config": [] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "vhost_blk", 00:16:40.697 "config": [] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "ublk", 00:16:40.697 "config": [ 00:16:40.697 { 00:16:40.697 "method": "ublk_create_target", 00:16:40.697 "params": { 00:16:40.697 "cpumask": "1" 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "ublk_start_disk", 00:16:40.697 "params": { 00:16:40.697 "bdev_name": "malloc0", 00:16:40.697 "ublk_id": 0, 00:16:40.697 "num_queues": 1, 00:16:40.697 "queue_depth": 128 00:16:40.697 } 00:16:40.697 } 00:16:40.697 ] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "nbd", 00:16:40.697 "config": [] 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "subsystem": "nvmf", 00:16:40.697 "config": [ 00:16:40.697 { 00:16:40.697 "method": "nvmf_set_config", 00:16:40.697 "params": { 00:16:40.697 "discovery_filter": "match_any", 00:16:40.697 "admin_cmd_passthru": { 00:16:40.697 "identify_ctrlr": false 00:16:40.697 } 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "nvmf_set_max_subsystems", 00:16:40.697 "params": { 00:16:40.697 "max_subsystems": 1024 00:16:40.697 } 00:16:40.697 }, 00:16:40.697 { 00:16:40.697 "method": "nvmf_set_crdt", 00:16:40.697 "params": { 00:16:40.697 "crdt1": 0, 00:16:40.697 "crdt2": 0, 00:16:40.697 "crdt3": 0 00:16:40.697 } 00:16:40.697 } 00:16:40.697 ] 00:16:40.698 }, 00:16:40.698 { 00:16:40.698 "subsystem": "iscsi", 00:16:40.698 "config": [ 00:16:40.698 { 00:16:40.698 "method": "iscsi_set_options", 00:16:40.698 "params": { 00:16:40.698 "node_base": "iqn.2016-06.io.spdk", 00:16:40.698 "max_sessions": 128, 00:16:40.698 "max_connections_per_session": 2, 00:16:40.698 "max_queue_depth": 64, 00:16:40.698 "default_time2wait": 2, 00:16:40.698 "default_time2retain": 20, 00:16:40.698 
"first_burst_length": 8192, 00:16:40.698 "immediate_data": true, 00:16:40.698 "allow_duplicated_isid": false, 00:16:40.698 "error_recovery_level": 0, 00:16:40.698 "nop_timeout": 60, 00:16:40.698 "nop_in_interval": 30, 00:16:40.698 "disable_chap": false, 00:16:40.698 "require_chap": false, 00:16:40.698 "mutual_chap": false, 00:16:40.698 "chap_group": 0, 00:16:40.698 "max_large_datain_per_connection": 64, 00:16:40.698 "max_r2t_per_connection": 4, 00:16:40.698 "pdu_pool_size": 36864, 00:16:40.698 "immediate_data_pool_size": 16384, 00:16:40.698 "data_out_pool_size": 2048 00:16:40.698 } 00:16:40.698 } 00:16:40.698 ] 00:16:40.698 } 00:16:40.698 ] 00:16:40.698 }' 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76501 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76501 ']' 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76501 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76501 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.698 killing process with pid 76501 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76501' 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76501 00:16:40.698 12:10:28 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76501 00:16:42.076 [2024-07-26 12:10:29.903793] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:42.076 [2024-07-26 12:10:29.935220] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:42.076 [2024-07-26 12:10:29.935392] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:42.076 [2024-07-26 12:10:29.946154] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:42.076 [2024-07-26 12:10:29.946216] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:42.076 [2024-07-26 12:10:29.946227] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:42.076 [2024-07-26 12:10:29.946258] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:42.076 [2024-07-26 12:10:29.946428] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76571 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76571 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 76571 ']' 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:43.454 12:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.454 12:10:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:43.454 "subsystems": [ 00:16:43.454 { 00:16:43.454 "subsystem": "keyring", 00:16:43.454 "config": [] 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "subsystem": "iobuf", 00:16:43.454 "config": [ 00:16:43.454 { 00:16:43.454 "method": "iobuf_set_options", 00:16:43.454 "params": { 00:16:43.454 "small_pool_count": 8192, 00:16:43.454 "large_pool_count": 1024, 00:16:43.454 "small_bufsize": 8192, 00:16:43.454 "large_bufsize": 135168 00:16:43.454 } 00:16:43.454 } 00:16:43.454 ] 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "subsystem": "sock", 00:16:43.454 "config": [ 00:16:43.454 { 00:16:43.454 "method": "sock_set_default_impl", 00:16:43.454 "params": { 00:16:43.454 "impl_name": "posix" 00:16:43.454 } 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "method": "sock_impl_set_options", 00:16:43.454 "params": { 00:16:43.454 "impl_name": "ssl", 00:16:43.454 "recv_buf_size": 4096, 00:16:43.454 "send_buf_size": 4096, 00:16:43.454 "enable_recv_pipe": true, 00:16:43.454 "enable_quickack": false, 00:16:43.454 "enable_placement_id": 0, 00:16:43.454 "enable_zerocopy_send_server": true, 00:16:43.454 "enable_zerocopy_send_client": false, 00:16:43.454 "zerocopy_threshold": 0, 00:16:43.454 "tls_version": 0, 00:16:43.454 "enable_ktls": false 00:16:43.454 } 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "method": "sock_impl_set_options", 00:16:43.454 "params": { 00:16:43.454 "impl_name": "posix", 00:16:43.454 "recv_buf_size": 2097152, 00:16:43.454 "send_buf_size": 2097152, 00:16:43.454 "enable_recv_pipe": true, 00:16:43.454 "enable_quickack": false, 00:16:43.454 "enable_placement_id": 0, 00:16:43.454 "enable_zerocopy_send_server": true, 00:16:43.454 "enable_zerocopy_send_client": false, 00:16:43.454 "zerocopy_threshold": 0, 00:16:43.454 "tls_version": 0, 00:16:43.454 "enable_ktls": false 00:16:43.454 } 00:16:43.454 } 00:16:43.454 ] 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "subsystem": "vmd", 00:16:43.454 "config": [] 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "subsystem": "accel", 00:16:43.454 "config": [ 00:16:43.454 { 00:16:43.454 "method": "accel_set_options", 00:16:43.454 "params": { 00:16:43.454 "small_cache_size": 128, 00:16:43.454 "large_cache_size": 16, 00:16:43.454 "task_count": 2048, 00:16:43.454 "sequence_count": 2048, 00:16:43.454 "buf_count": 2048 00:16:43.454 } 00:16:43.454 } 00:16:43.454 ] 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "subsystem": "bdev", 00:16:43.454 "config": [ 00:16:43.454 { 00:16:43.454 "method": "bdev_set_options", 00:16:43.454 "params": { 00:16:43.454 "bdev_io_pool_size": 65535, 00:16:43.454 "bdev_io_cache_size": 256, 00:16:43.454 "bdev_auto_examine": true, 00:16:43.454 "iobuf_small_cache_size": 128, 00:16:43.454 "iobuf_large_cache_size": 16 00:16:43.454 } 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "method": "bdev_raid_set_options", 00:16:43.454 "params": { 00:16:43.454 "process_window_size_kb": 1024, 00:16:43.454 "process_max_bandwidth_mb_sec": 0 00:16:43.454 } 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "method": "bdev_iscsi_set_options", 00:16:43.454 "params": { 00:16:43.454 "timeout_sec": 30 00:16:43.454 } 00:16:43.454 }, 00:16:43.454 { 00:16:43.454 "method": 
"bdev_nvme_set_options", 00:16:43.454 "params": { 00:16:43.454 "action_on_timeout": "none", 00:16:43.454 "timeout_us": 0, 00:16:43.454 "timeout_admin_us": 0, 00:16:43.454 "keep_alive_timeout_ms": 10000, 00:16:43.454 "arbitration_burst": 0, 00:16:43.454 "low_priority_weight": 0, 00:16:43.454 "medium_priority_weight": 0, 00:16:43.454 "high_priority_weight": 0, 00:16:43.454 "nvme_adminq_poll_period_us": 10000, 00:16:43.454 "nvme_ioq_poll_period_us": 0, 00:16:43.454 "io_queue_requests": 0, 00:16:43.454 "delay_cmd_submit": true, 00:16:43.454 "transport_retry_count": 4, 00:16:43.454 "bdev_retry_count": 3, 00:16:43.454 "transport_ack_timeout": 0, 00:16:43.454 "ctrlr_loss_timeout_sec": 0, 00:16:43.454 "reconnect_delay_sec": 0, 00:16:43.454 "fast_io_fail_timeout_sec": 0, 00:16:43.454 "disable_auto_failback": false, 00:16:43.454 "generate_uuids": false, 00:16:43.454 "transport_tos": 0, 00:16:43.454 "nvme_error_stat": false, 00:16:43.454 "rdma_srq_size": 0, 00:16:43.454 "io_path_stat": false, 00:16:43.454 "allow_accel_sequence": false, 00:16:43.454 "rdma_max_cq_size": 0, 00:16:43.454 "rdma_cm_event_timeout_ms": 0, 00:16:43.454 "dhchap_digests": [ 00:16:43.454 "sha256", 00:16:43.454 "sha384", 00:16:43.454 "sha512" 00:16:43.454 ], 00:16:43.454 "dhchap_dhgroups": [ 00:16:43.454 "null", 00:16:43.454 "ffdhe2048", 00:16:43.454 "ffdhe3072", 00:16:43.454 "ffdhe4096", 00:16:43.454 "ffdhe6144", 00:16:43.454 "ffdhe8192" 00:16:43.454 ] 00:16:43.454 } 00:16:43.454 }, 00:16:43.455 { 00:16:43.455 "method": "bdev_nvme_set_hotplug", 00:16:43.455 "params": { 00:16:43.455 "period_us": 100000, 00:16:43.455 "enable": false 00:16:43.455 } 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "method": "bdev_malloc_create", 00:16:43.455 "params": { 00:16:43.455 "name": "malloc0", 00:16:43.455 "num_blocks": 8192, 00:16:43.455 "block_size": 4096, 00:16:43.455 "physical_block_size": 4096, 00:16:43.455 "uuid": "01675fd0-c5ae-47d7-a36a-e989284589e0", 00:16:43.455 "optimal_io_boundary": 0, 00:16:43.455 "md_size": 0, 00:16:43.455 "dif_type": 0, 00:16:43.455 "dif_is_head_of_md": false, 00:16:43.455 "dif_pi_format": 0 00:16:43.455 } 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "method": "bdev_wait_for_examine" 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "scsi", 00:16:43.455 "config": null 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "scheduler", 00:16:43.455 "config": [ 00:16:43.455 { 00:16:43.455 "method": "framework_set_scheduler", 00:16:43.455 "params": { 00:16:43.455 "name": "static" 00:16:43.455 } 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "vhost_scsi", 00:16:43.455 "config": [] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "vhost_blk", 00:16:43.455 "config": [] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "ublk", 00:16:43.455 "config": [ 00:16:43.455 { 00:16:43.455 "method": "ublk_create_target", 00:16:43.455 "params": { 00:16:43.455 "cpumask": "1" 00:16:43.455 } 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "method": "ublk_start_disk", 00:16:43.455 "params": { 00:16:43.455 "bdev_name": "malloc0", 00:16:43.455 "ublk_id": 0, 00:16:43.455 "num_queues": 1, 00:16:43.455 "queue_depth": 128 00:16:43.455 } 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "nbd", 00:16:43.455 "config": [] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "nvmf", 00:16:43.455 "config": [ 00:16:43.455 { 00:16:43.455 "method": "nvmf_set_config", 00:16:43.455 "params": { 00:16:43.455 "discovery_filter": 
"match_any", 00:16:43.455 "admin_cmd_passthru": { 00:16:43.455 "identify_ctrlr": false 00:16:43.455 } 00:16:43.455 } 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "method": "nvmf_set_max_subsystems", 00:16:43.455 "params": { 00:16:43.455 "max_subsystems": 1024 00:16:43.455 } 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "method": "nvmf_set_crdt", 00:16:43.455 "params": { 00:16:43.455 "crdt1": 0, 00:16:43.455 "crdt2": 0, 00:16:43.455 "crdt3": 0 00:16:43.455 } 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 }, 00:16:43.455 { 00:16:43.455 "subsystem": "iscsi", 00:16:43.455 "config": [ 00:16:43.455 { 00:16:43.455 "method": "iscsi_set_options", 00:16:43.455 "params": { 00:16:43.455 "node_base": "iqn.2016-06.io.spdk", 00:16:43.455 "max_sessions": 128, 00:16:43.455 "max_connections_per_session": 2, 00:16:43.455 "max_queue_depth": 64, 00:16:43.455 "default_time2wait": 2, 00:16:43.455 "default_time2retain": 20, 00:16:43.455 "first_burst_length": 8192, 00:16:43.455 "immediate_data": true, 00:16:43.455 "allow_duplicated_isid": false, 00:16:43.455 "error_recovery_level": 0, 00:16:43.455 "nop_timeout": 60, 00:16:43.455 "nop_in_interval": 30, 00:16:43.455 "disable_chap": false, 00:16:43.455 "require_chap": false, 00:16:43.455 "mutual_chap": false, 00:16:43.455 "chap_group": 0, 00:16:43.455 "max_large_datain_per_connection": 64, 00:16:43.455 "max_r2t_per_connection": 4, 00:16:43.455 "pdu_pool_size": 36864, 00:16:43.455 "immediate_data_pool_size": 16384, 00:16:43.455 "data_out_pool_size": 2048 00:16:43.455 } 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 } 00:16:43.455 ] 00:16:43.455 }' 00:16:43.714 [2024-07-26 12:10:31.520702] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:43.714 [2024-07-26 12:10:31.520825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76571 ] 00:16:43.714 [2024-07-26 12:10:31.691906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.973 [2024-07-26 12:10:31.930521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.351 [2024-07-26 12:10:33.012156] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.351 [2024-07-26 12:10:33.013543] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.351 [2024-07-26 12:10:33.020299] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:45.351 [2024-07-26 12:10:33.020418] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:45.351 [2024-07-26 12:10:33.020430] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:45.351 [2024-07-26 12:10:33.020440] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.351 [2024-07-26 12:10:33.029227] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.351 [2024-07-26 12:10:33.029253] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.351 [2024-07-26 12:10:33.036156] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.351 [2024-07-26 12:10:33.036265] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:45.351 [2024-07-26 12:10:33.053148] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76571 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 76571 ']' 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 76571 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76571 00:16:45.351 killing process with pid 76571 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76571' 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 76571 00:16:45.351 12:10:33 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 76571 00:16:47.253 [2024-07-26 12:10:34.742536] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:47.253 [2024-07-26 12:10:34.784165] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:47.253 [2024-07-26 12:10:34.784372] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:47.253 [2024-07-26 12:10:34.792167] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:47.253 [2024-07-26 12:10:34.792238] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:47.253 [2024-07-26 12:10:34.792248] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:47.253 [2024-07-26 12:10:34.792279] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:47.253 [2024-07-26 12:10:34.792447] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:48.631 12:10:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:48.631 00:16:48.631 real 0m9.787s 00:16:48.631 user 0m8.381s 00:16:48.631 sys 0m2.074s 00:16:48.631 12:10:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.631 ************************************ 00:16:48.631 END TEST test_save_ublk_config 00:16:48.631 12:10:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:48.631 ************************************ 00:16:48.631 12:10:36 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=76661 00:16:48.631 12:10:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:48.631 12:10:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.631 12:10:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76661 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@831 -- # '[' -z 76661 ']' 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.631 12:10:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.631 [2024-07-26 12:10:36.411809] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:16:48.631 [2024-07-26 12:10:36.411976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76661 ] 00:16:48.631 [2024-07-26 12:10:36.585090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.890 [2024-07-26 12:10:36.824713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.890 [2024-07-26 12:10:36.824744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.826 12:10:37 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.826 12:10:37 ublk -- common/autotest_common.sh@864 -- # return 0 00:16:49.826 12:10:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:49.826 12:10:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:49.826 12:10:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.826 12:10:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.826 ************************************ 00:16:49.826 START TEST test_create_ublk 00:16:49.826 ************************************ 00:16:49.826 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:16:49.826 12:10:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:49.826 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.826 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.085 [2024-07-26 12:10:37.809139] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:50.085 [2024-07-26 12:10:37.812175] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:50.085 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.085 12:10:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:50.085 12:10:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:50.085 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.085 12:10:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.344 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.344 
12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:50.344 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:50.344 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.344 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.344 [2024-07-26 12:10:38.136315] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:50.344 [2024-07-26 12:10:38.136767] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:50.344 [2024-07-26 12:10:38.136788] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:50.344 [2024-07-26 12:10:38.136801] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:50.344 [2024-07-26 12:10:38.143183] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:50.344 [2024-07-26 12:10:38.143219] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:50.344 [2024-07-26 12:10:38.151152] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:50.344 [2024-07-26 12:10:38.161343] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:50.344 [2024-07-26 12:10:38.194177] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:50.344 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:50.345 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.345 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:50.345 12:10:38 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:50.345 { 00:16:50.345 "ublk_device": "/dev/ublkb0", 00:16:50.345 "id": 0, 00:16:50.345 "queue_depth": 512, 00:16:50.345 "num_queues": 4, 00:16:50.345 "bdev_name": "Malloc0" 00:16:50.345 } 00:16:50.345 ]' 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:50.345 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:50.604 12:10:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:50.604 12:10:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:50.604 fio: verification read phase will never start because write phase uses all of runtime 00:16:50.604 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:50.604 fio-3.35 00:16:50.604 Starting 1 process 00:17:02.908 00:17:02.908 fio_test: (groupid=0, jobs=1): err= 0: pid=76713: Fri Jul 26 12:10:48 2024 00:17:02.908 write: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(505MiB/10000msec); 0 zone resets 00:17:02.908 clat (usec): min=48, max=7883, avg=76.45, stdev=138.12 00:17:02.908 lat (usec): min=49, max=7885, avg=76.91, stdev=138.14 00:17:02.908 clat percentiles (usec): 00:17:02.908 | 1.00th=[ 64], 5.00th=[ 65], 10.00th=[ 65], 20.00th=[ 67], 00:17:02.908 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:17:02.908 | 70.00th=[ 71], 80.00th=[ 73], 90.00th=[ 78], 95.00th=[ 83], 00:17:02.908 | 99.00th=[ 98], 99.50th=[ 109], 99.90th=[ 3032], 99.95th=[ 3523], 00:17:02.908 | 99.99th=[ 3982] 00:17:02.908 bw ( KiB/s): min=18360, max=54440, per=99.90%, avg=51684.95, stdev=8095.12, samples=19 00:17:02.908 iops : min= 4590, max=13610, avg=12921.21, stdev=2023.77, samples=19 00:17:02.908 lat (usec) : 50=0.01%, 100=99.13%, 250=0.57%, 500=0.01%, 750=0.01% 00:17:02.908 lat (usec) : 1000=0.02% 00:17:02.908 lat (msec) : 2=0.07%, 4=0.17%, 10=0.01% 00:17:02.908 cpu : usr=2.54%, sys=9.13%, ctx=129340, majf=0, minf=798 00:17:02.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:02.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.908 issued rwts: total=0,129338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:02.908 00:17:02.908 Run status group 0 (all jobs): 00:17:02.908 WRITE: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=505MiB (530MB), run=10000-10000msec 00:17:02.908 00:17:02.908 Disk stats (read/write): 00:17:02.908 ublkb0: ios=0/127930, merge=0/0, ticks=0/8820, in_queue=8820, util=99.12% 00:17:02.908 12:10:48 ublk.test_create_ublk -- 
ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 [2024-07-26 12:10:48.707521] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:02.908 [2024-07-26 12:10:48.744187] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:02.908 [2024-07-26 12:10:48.745303] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:02.908 [2024-07-26 12:10:48.746389] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:02.908 [2024-07-26 12:10:48.746680] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:02.908 [2024-07-26 12:10:48.746698] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.908 12:10:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 [2024-07-26 12:10:48.765254] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:02.908 request: 00:17:02.908 { 00:17:02.908 "ublk_id": 0, 00:17:02.908 "method": "ublk_stop_disk", 00:17:02.908 "req_id": 1 00:17:02.908 } 00:17:02.908 Got JSON-RPC error response 00:17:02.908 response: 00:17:02.908 { 00:17:02.908 "code": -19, 00:17:02.908 "message": "No such device" 00:17:02.908 } 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.908 12:10:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 [2024-07-26 12:10:48.781239] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:02.908 [2024-07-26 12:10:48.789145] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:02.908 [2024-07-26 12:10:48.789188] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:02.908 12:10:48 
ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.908 12:10:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.908 12:10:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.908 12:10:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:02.908 12:10:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:02.908 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.908 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.908 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.908 12:10:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:02.909 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:02.909 12:10:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:02.909 00:17:02.909 real 0m11.501s 00:17:02.909 user 0m0.655s 00:17:02.909 sys 0m1.050s 00:17:02.909 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 ************************************ 00:17:02.909 END TEST test_create_ublk 00:17:02.909 ************************************ 00:17:02.909 12:10:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:02.909 12:10:49 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.909 12:10:49 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.909 12:10:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 ************************************ 00:17:02.909 START TEST test_create_multi_ublk 00:17:02.909 ************************************ 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 [2024-07-26 12:10:49.376146] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:02.909 [2024-07-26 12:10:49.379095] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:02.909 
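A condensed sketch of the sequence test_create_ublk (which finishes above) drives through rpc_cmd, written as direct rpc.py calls. The RPC names, parameters, and the fio command line are taken from the trace itself; the only assumptions are a spdk_tgt already running with '-L ublk' and the ublk_drv kernel module loaded.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc ublk_create_target                        # bring up the ublk target inside the SPDK app
    $rpc bdev_malloc_create -b Malloc0 128 4096    # 128 MB RAM-backed bdev with 4096-byte blocks
    $rpc ublk_start_disk Malloc0 0 -q 4 -d 512     # exposes /dev/ublkb0 with 4 queues, queue depth 512
    $rpc ublk_get_disks -n 0                       # JSON with ublk_device/id/queue_depth/num_queues/bdev_name

    # the exact fio job the test builds via run_fio_test: write 0xcc over 128 MiB for 10 s
    # (time_based means the verify phase never actually starts, as the fio banner notes)
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

    $rpc ublk_stop_disk 0            # stopping the same id a second time fails with -19 "No such device"
    $rpc ublk_destroy_target
    $rpc bdev_malloc_delete Malloc0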
12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 [2024-07-26 12:10:49.721306] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:02.909 [2024-07-26 12:10:49.721769] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:02.909 [2024-07-26 12:10:49.721792] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:02.909 [2024-07-26 12:10:49.721802] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:02.909 [2024-07-26 12:10:49.730451] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:02.909 [2024-07-26 12:10:49.730482] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:02.909 [2024-07-26 12:10:49.737164] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:02.909 [2024-07-26 12:10:49.737752] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:02.909 [2024-07-26 12:10:49.760171] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 [2024-07-26 12:10:50.123302] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:02.909 [2024-07-26 12:10:50.123753] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:02.909 [2024-07-26 12:10:50.123774] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:02.909 [2024-07-26 12:10:50.123787] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:02.909 [2024-07-26 12:10:50.135148] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:02.909 [2024-07-26 12:10:50.135184] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:02.909 [2024-07-26 12:10:50.146135] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:02.909 [2024-07-26 12:10:50.146728] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:02.909 [2024-07-26 12:10:50.155195] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 [2024-07-26 12:10:50.513325] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:02.909 [2024-07-26 12:10:50.513800] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:02.909 [2024-07-26 12:10:50.513827] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:02.909 [2024-07-26 12:10:50.513838] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:02.909 [2024-07-26 12:10:50.521216] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:02.909 [2024-07-26 12:10:50.521246] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:02.909 [2024-07-26 12:10:50.529161] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:02.909 [2024-07-26 12:10:50.529810] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:02.909 [2024-07-26 12:10:50.538194] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.909 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.909 [2024-07-26 12:10:50.885299] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:02.909 [2024-07-26 12:10:50.885760] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:02.909 [2024-07-26 12:10:50.885805] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:02.909 [2024-07-26 12:10:50.885818] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.168 [2024-07-26 12:10:50.893207] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.168 [2024-07-26 12:10:50.893248] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.168 [2024-07-26 12:10:50.901157] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.168 [2024-07-26 12:10:50.901765] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:03.168 [2024-07-26 12:10:50.922156] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:03.168 { 00:17:03.168 "ublk_device": "/dev/ublkb0", 00:17:03.168 "id": 0, 00:17:03.168 "queue_depth": 512, 00:17:03.168 "num_queues": 4, 00:17:03.168 "bdev_name": "Malloc0" 00:17:03.168 }, 00:17:03.168 { 00:17:03.168 "ublk_device": "/dev/ublkb1", 00:17:03.168 "id": 1, 00:17:03.168 "queue_depth": 512, 00:17:03.168 "num_queues": 4, 00:17:03.168 "bdev_name": "Malloc1" 00:17:03.168 }, 00:17:03.168 { 00:17:03.168 "ublk_device": "/dev/ublkb2", 00:17:03.168 "id": 2, 00:17:03.168 "queue_depth": 512, 00:17:03.168 "num_queues": 4, 00:17:03.168 "bdev_name": "Malloc2" 00:17:03.168 }, 00:17:03.168 { 00:17:03.168 "ublk_device": "/dev/ublkb3", 00:17:03.168 "id": 3, 00:17:03.168 "queue_depth": 512, 00:17:03.168 "num_queues": 4, 00:17:03.168 "bdev_name": "Malloc3" 00:17:03.168 } 00:17:03.168 ]' 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.168 12:10:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.168 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:03.427 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:03.685 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
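The per-index jq checks the multi-ublk test runs in this part of the trace can be rolled into a single loop; the field names and expected values are the ones shown in the ublk_get_disks output above, assuming the four Malloc-backed disks created earlier in this test.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    disks=$($rpc ublk_get_disks)
    for i in 0 1 2 3; do
        [[ $(jq -r ".[$i].ublk_device" <<< "$disks") == "/dev/ublkb$i" ]] || exit 1
        [[ $(jq -r ".[$i].id"          <<< "$disks") == "$i" ]]           || exit 1
        [[ $(jq -r ".[$i].queue_depth" <<< "$disks") == 512 ]]            || exit 1
        [[ $(jq -r ".[$i].num_queues"  <<< "$disks") == 4 ]]              || exit 1
        [[ $(jq -r ".[$i].bdev_name"   <<< "$disks") == "Malloc$i" ]]     || exit 1
    done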
00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.943 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.943 [2024-07-26 12:10:51.770323] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.943 [2024-07-26 12:10:51.807516] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:03.943 [2024-07-26 12:10:51.810483] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:03.943 [2024-07-26 12:10:51.817177] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:03.944 [2024-07-26 12:10:51.817525] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:03.944 [2024-07-26 12:10:51.817539] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 [2024-07-26 12:10:51.825317] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.944 [2024-07-26 12:10:51.857597] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:03.944 [2024-07-26 12:10:51.859092] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:03.944 [2024-07-26 12:10:51.864265] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:03.944 [2024-07-26 12:10:51.864568] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:03.944 [2024-07-26 12:10:51.864587] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.944 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 [2024-07-26 12:10:51.880281] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:03.944 [2024-07-26 12:10:51.919610] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.202 [2024-07-26 12:10:51.922537] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.202 [2024-07-26 12:10:51.929153] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.202 [2024-07-26 12:10:51.929551] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:04.202 [2024-07-26 12:10:51.929572] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.202 [2024-07-26 12:10:51.936332] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:04.202 [2024-07-26 12:10:51.983203] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:04.202 [2024-07-26 12:10:51.984324] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:04.202 [2024-07-26 12:10:51.991295] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:04.202 [2024-07-26 12:10:51.991588] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:04.202 [2024-07-26 12:10:51.991607] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.202 12:10:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:04.460 [2024-07-26 12:10:52.183269] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:04.460 [2024-07-26 12:10:52.191139] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:04.460 [2024-07-26 12:10:52.191202] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:04.460 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:04.460 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.460 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:04.460 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.460 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:04.718 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.718 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:04.718 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:04.718 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.718 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.283 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.283 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:17:05.283 12:10:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:05.283 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.283 12:10:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.541 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.541 12:10:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:05.541 12:10:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:05.541 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.541 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:05.799 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:06.057 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:06.057 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:06.058 00:17:06.058 real 0m4.469s 00:17:06.058 user 0m0.970s 00:17:06.058 sys 0m0.232s 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:06.058 12:10:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:06.058 ************************************ 00:17:06.058 END TEST test_create_multi_ublk 00:17:06.058 ************************************ 00:17:06.058 12:10:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:06.058 12:10:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:06.058 12:10:53 ublk -- ublk/ublk.sh@130 -- # killprocess 76661 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@950 -- # '[' -z 76661 ']' 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@954 -- # kill -0 76661 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@955 -- # uname 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76661 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:06.058 
killing process with pid 76661 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76661' 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@969 -- # kill 76661 00:17:06.058 12:10:53 ublk -- common/autotest_common.sh@974 -- # wait 76661 00:17:07.436 [2024-07-26 12:10:55.131251] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:17:07.436 [2024-07-26 12:10:55.131333] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:17:08.814 00:17:08.814 real 0m30.127s 00:17:08.814 user 0m44.838s 00:17:08.814 sys 0m8.279s 00:17:08.814 12:10:56 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:08.814 12:10:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:08.814 ************************************ 00:17:08.814 END TEST ublk 00:17:08.814 ************************************ 00:17:08.814 12:10:56 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:08.814 12:10:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:08.814 12:10:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.814 12:10:56 -- common/autotest_common.sh@10 -- # set +x 00:17:08.814 ************************************ 00:17:08.814 START TEST ublk_recovery 00:17:08.814 ************************************ 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:08.814 * Looking for test storage... 00:17:08.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:08.814 12:10:56 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77056 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:08.814 12:10:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77056 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77056 ']' 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
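The recovery test's bring-up traced here amounts to loading the kernel driver and starting a two-core target with ublk logging. The waitforlisten helper from autotest_common.sh is approximated below with a simple poll of the default RPC socket; spdk_get_version is only assumed here as a liveness probe, since the helper's exact mechanics are not shown in this log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # crude stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers
    until $rpc -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done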
00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.814 12:10:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 [2024-07-26 12:10:56.772041] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:17:08.815 [2024-07-26 12:10:56.772175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77056 ] 00:17:09.072 [2024-07-26 12:10:56.943767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:09.332 [2024-07-26 12:10:57.170807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.332 [2024-07-26 12:10:57.170840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:17:10.268 12:10:58 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.268 [2024-07-26 12:10:58.075139] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:10.268 [2024-07-26 12:10:58.078147] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.268 12:10:58 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.268 12:10:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.528 malloc0 00:17:10.528 12:10:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.528 12:10:58 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:10.528 12:10:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.528 12:10:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.528 [2024-07-26 12:10:58.259299] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:17:10.528 [2024-07-26 12:10:58.259421] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:10.528 [2024-07-26 12:10:58.259432] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:10.528 [2024-07-26 12:10:58.259443] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:10.528 [2024-07-26 12:10:58.268286] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:10.528 [2024-07-26 12:10:58.268321] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:10.528 [2024-07-26 12:10:58.275150] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:10.528 [2024-07-26 12:10:58.275298] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:10.528 [2024-07-26 12:10:58.286154] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:10.528 1 00:17:10.528 12:10:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:10.528 12:10:58 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:11.466 12:10:59 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77097 00:17:11.466 12:10:59 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:11.466 12:10:59 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:11.466 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:11.466 fio-3.35 00:17:11.466 Starting 1 process 00:17:16.735 12:11:04 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77056 00:17:16.735 12:11:04 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:22.029 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77056 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:22.029 12:11:09 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77207 00:17:22.029 12:11:09 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:22.029 12:11:09 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.029 12:11:09 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77207 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77207 ']' 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.029 12:11:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.029 [2024-07-26 12:11:09.411477] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
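The crash-and-recover flow this test exercises, condensed into one sketch. The RPCs, device numbers, and fio invocation are the ones visible in this trace; $rpc and the socket poll are as in the sketch above.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096      # 64 MB bdev behind the recoverable disk
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1, 2 queues, queue depth 128

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    sleep 5
    kill -9 "$spdk_pid"                             # hard-kill the target while I/O is in flight
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # ...poll the RPC socket as before, then re-attach the still-live /dev/ublkb1:
    $rpc ublk_create_target
    $rpc ublk_recover_disk malloc0 1                # drives the UBLK_CMD_START/END_USER_RECOVERY commands logged in this trace
    wait "$fio_pid"                                 # fio finishes its 60 s run against the recovered disk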
00:17:22.029 [2024-07-26 12:11:09.411651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77207 ] 00:17:22.029 [2024-07-26 12:11:09.583298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:22.029 [2024-07-26 12:11:09.820644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.029 [2024-07-26 12:11:09.820678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:17:22.962 12:11:10 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.962 [2024-07-26 12:11:10.765172] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:22.962 [2024-07-26 12:11:10.768046] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.962 12:11:10 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.962 malloc0 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.962 12:11:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.962 12:11:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.220 [2024-07-26 12:11:10.949309] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:23.220 [2024-07-26 12:11:10.949362] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:23.220 [2024-07-26 12:11:10.949372] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:23.220 [2024-07-26 12:11:10.957194] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:23.220 [2024-07-26 12:11:10.957225] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:23.220 [2024-07-26 12:11:10.957324] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:23.220 1 00:17:23.220 12:11:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.220 12:11:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77097 00:17:23.220 [2024-07-26 12:11:10.965154] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:23.220 [2024-07-26 12:11:10.968929] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:23.220 [2024-07-26 12:11:10.972370] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:23.220 [2024-07-26 12:11:10.972402] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:19.472 00:18:19.472 
fio_test: (groupid=0, jobs=1): err= 0: pid=77100: Fri Jul 26 12:11:59 2024 00:18:19.472 read: IOPS=22.3k, BW=87.1MiB/s (91.3MB/s)(5224MiB/60002msec) 00:18:19.472 slat (nsec): min=1911, max=445558, avg=7144.94, stdev=2322.64 00:18:19.472 clat (usec): min=1324, max=6676.1k, avg=2868.77, stdev=49298.95 00:18:19.472 lat (usec): min=1330, max=6676.2k, avg=2875.91, stdev=49298.97 00:18:19.472 clat percentiles (usec): 00:18:19.472 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2278], 00:18:19.472 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2376], 00:18:19.472 | 70.00th=[ 2442], 80.00th=[ 2474], 90.00th=[ 2835], 95.00th=[ 3720], 00:18:19.472 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 6980], 99.95th=[ 8356], 00:18:19.472 | 99.99th=[12780] 00:18:19.472 bw ( KiB/s): min= 5556, max=105808, per=100.00%, avg=99162.62, stdev=12308.38, samples=107 00:18:19.472 iops : min= 1389, max=26452, avg=24790.62, stdev=3077.10, samples=107 00:18:19.472 write: IOPS=22.3k, BW=86.9MiB/s (91.2MB/s)(5217MiB/60002msec); 0 zone resets 00:18:19.472 slat (usec): min=2, max=383, avg= 7.18, stdev= 2.34 00:18:19.472 clat (usec): min=1335, max=6676.1k, avg=2862.41, stdev=42820.85 00:18:19.472 lat (usec): min=1342, max=6676.1k, avg=2869.59, stdev=42820.87 00:18:19.472 clat percentiles (usec): 00:18:19.472 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2376], 00:18:19.472 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:18:19.472 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2835], 95.00th=[ 3720], 00:18:19.472 | 99.00th=[ 5080], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 8291], 00:18:19.472 | 99.99th=[12911] 00:18:19.472 bw ( KiB/s): min= 5229, max=105632, per=100.00%, avg=99049.69, stdev=12188.79, samples=107 00:18:19.472 iops : min= 1307, max=26408, avg=24762.38, stdev=3047.23, samples=107 00:18:19.472 lat (msec) : 2=1.29%, 4=94.89%, 10=3.79%, 20=0.02%, >=2000=0.01% 00:18:19.472 cpu : usr=11.97%, sys=31.01%, ctx=112497, majf=0, minf=13 00:18:19.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:19.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.472 issued rwts: total=1337270,1335555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.472 00:18:19.472 Run status group 0 (all jobs): 00:18:19.472 READ: bw=87.1MiB/s (91.3MB/s), 87.1MiB/s-87.1MiB/s (91.3MB/s-91.3MB/s), io=5224MiB (5477MB), run=60002-60002msec 00:18:19.472 WRITE: bw=86.9MiB/s (91.2MB/s), 86.9MiB/s-86.9MiB/s (91.2MB/s-91.2MB/s), io=5217MiB (5470MB), run=60002-60002msec 00:18:19.472 00:18:19.472 Disk stats (read/write): 00:18:19.472 ublkb1: ios=1334461/1332979, merge=0/0, ticks=3724033/3577476, in_queue=7301510, util=99.94% 00:18:19.472 12:11:59 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 [2024-07-26 12:11:59.572143] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:19.472 [2024-07-26 12:11:59.609207] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:19.472 [2024-07-26 12:11:59.610199] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:19.472 [2024-07-26 
12:11:59.617182] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:19.472 [2024-07-26 12:11:59.617312] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:19.472 [2024-07-26 12:11:59.617325] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.472 12:11:59 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 [2024-07-26 12:11:59.632265] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.472 [2024-07-26 12:11:59.642170] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:19.472 [2024-07-26 12:11:59.642232] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.472 12:11:59 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:19.472 12:11:59 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:19.472 12:11:59 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77207 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77207 ']' 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77207 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77207 00:18:19.472 killing process with pid 77207 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77207' 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77207 00:18:19.472 12:11:59 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77207 00:18:19.472 [2024-07-26 12:12:00.853309] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:18:19.472 [2024-07-26 12:12:00.853367] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:18:19.472 00:18:19.472 real 1m5.853s 00:18:19.472 user 1m49.524s 00:18:19.472 sys 0m36.719s 00:18:19.472 ************************************ 00:18:19.472 END TEST ublk_recovery 00:18:19.472 ************************************ 00:18:19.472 12:12:02 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.472 12:12:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 12:12:02 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@264 -- # timing_exit lib 00:18:19.472 12:12:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.472 12:12:02 -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 12:12:02 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:19.472 
12:12:02 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:18:19.472 12:12:02 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.472 12:12:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:19.472 12:12:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.472 12:12:02 -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 ************************************ 00:18:19.472 START TEST ftl 00:18:19.472 ************************************ 00:18:19.472 12:12:02 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.472 * Looking for test storage... 00:18:19.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.472 12:12:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:19.472 12:12:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:19.472 12:12:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.472 12:12:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:19.472 12:12:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:19.472 12:12:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:19.472 12:12:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.472 12:12:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.473 12:12:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.473 12:12:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.473 12:12:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:19.473 12:12:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:19.473 12:12:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:19.473 12:12:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.473 12:12:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:19.473 12:12:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:19.473 12:12:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.473 12:12:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:19.473 12:12:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.473 12:12:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:19.473 12:12:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:19.473 12:12:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:19.473 12:12:02 ftl -- ftl/common.sh@25 
-- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.473 12:12:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:19.473 12:12:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:19.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.473 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.473 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.473 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.473 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:19.473 12:12:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77999 00:18:19.473 12:12:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:19.473 12:12:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77999 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@831 -- # '[' -z 77999 ']' 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.473 12:12:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:19.473 [2024-07-26 12:12:03.605411] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
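The ftl.sh prologue traced above, reduced to its moving parts: rebind the NVMe controllers with setup.sh, then launch the target paused with --wait-for-rpc and wait for its RPC socket (the socket poll again stands in for waitforlisten).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    PCI_ALLOWED= PCI_BLOCKED= DRIVER_OVERRIDE= /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    until $rpc spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
    # configuration continues below in the trace: bdev_set_options -d, framework_start_init,
    # and load_subsystem_config fed from gen_nvme.sh, before the cache/base disks are picked by jq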
00:18:19.473 [2024-07-26 12:12:03.605561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77999 ] 00:18:19.473 [2024-07-26 12:12:03.777381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.473 [2024-07-26 12:12:04.019445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.473 12:12:04 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.473 12:12:04 ftl -- common/autotest_common.sh@864 -- # return 0 00:18:19.473 12:12:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:19.473 12:12:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:19.473 12:12:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:19.473 12:12:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@50 -- # break 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@63 -- # break 00:18:19.473 12:12:06 ftl -- ftl/ftl.sh@66 -- # killprocess 77999 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@950 -- # '[' -z 77999 ']' 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@954 -- # kill -0 77999 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@955 -- # uname 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77999 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.473 killing process with pid 77999 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77999' 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@969 -- # kill 77999 00:18:19.473 12:12:06 ftl -- common/autotest_common.sh@974 -- # wait 77999 00:18:21.376 12:12:09 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:21.376 12:12:09 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:21.376 12:12:09 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:21.376 12:12:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.376 12:12:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:21.376 ************************************ 00:18:21.376 START TEST ftl_fio_basic 00:18:21.376 ************************************ 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:21.376 * Looking for test storage... 00:18:21.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78140 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78140 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 78140 ']' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:21.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:21.376 12:12:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:21.635 [2024-07-26 12:12:09.379705] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:18:21.635 [2024-07-26 12:12:09.379845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78140 ] 00:18:21.635 [2024-07-26 12:12:09.551191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.894 [2024-07-26 12:12:09.787174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.894 [2024-07-26 12:12:09.787291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.894 [2024-07-26 12:12:09.787328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:22.831 12:12:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:23.090 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:23.349 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.349 { 00:18:23.349 "name": "nvme0n1", 00:18:23.349 "aliases": [ 00:18:23.349 "8d5a691d-d990-4ceb-bd06-6b205a00d904" 00:18:23.349 ], 00:18:23.349 "product_name": "NVMe disk", 00:18:23.349 "block_size": 4096, 00:18:23.349 "num_blocks": 1310720, 00:18:23.349 "uuid": "8d5a691d-d990-4ceb-bd06-6b205a00d904", 00:18:23.350 "assigned_rate_limits": { 00:18:23.350 "rw_ios_per_sec": 0, 00:18:23.350 "rw_mbytes_per_sec": 0, 00:18:23.350 "r_mbytes_per_sec": 0, 00:18:23.350 "w_mbytes_per_sec": 0 00:18:23.350 }, 00:18:23.350 "claimed": false, 00:18:23.350 "zoned": false, 00:18:23.350 "supported_io_types": { 00:18:23.350 "read": true, 00:18:23.350 "write": true, 00:18:23.350 "unmap": true, 00:18:23.350 "flush": true, 00:18:23.350 "reset": true, 00:18:23.350 "nvme_admin": true, 00:18:23.350 "nvme_io": true, 00:18:23.350 "nvme_io_md": false, 00:18:23.350 "write_zeroes": true, 00:18:23.350 "zcopy": false, 00:18:23.350 "get_zone_info": false, 00:18:23.350 "zone_management": false, 00:18:23.350 "zone_append": false, 00:18:23.350 "compare": true, 00:18:23.350 "compare_and_write": false, 00:18:23.350 "abort": true, 00:18:23.350 "seek_hole": false, 00:18:23.350 
"seek_data": false, 00:18:23.350 "copy": true, 00:18:23.350 "nvme_iov_md": false 00:18:23.350 }, 00:18:23.350 "driver_specific": { 00:18:23.350 "nvme": [ 00:18:23.350 { 00:18:23.350 "pci_address": "0000:00:11.0", 00:18:23.350 "trid": { 00:18:23.350 "trtype": "PCIe", 00:18:23.350 "traddr": "0000:00:11.0" 00:18:23.350 }, 00:18:23.350 "ctrlr_data": { 00:18:23.350 "cntlid": 0, 00:18:23.350 "vendor_id": "0x1b36", 00:18:23.350 "model_number": "QEMU NVMe Ctrl", 00:18:23.350 "serial_number": "12341", 00:18:23.350 "firmware_revision": "8.0.0", 00:18:23.350 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:23.350 "oacs": { 00:18:23.350 "security": 0, 00:18:23.350 "format": 1, 00:18:23.350 "firmware": 0, 00:18:23.350 "ns_manage": 1 00:18:23.350 }, 00:18:23.350 "multi_ctrlr": false, 00:18:23.350 "ana_reporting": false 00:18:23.350 }, 00:18:23.350 "vs": { 00:18:23.350 "nvme_version": "1.4" 00:18:23.350 }, 00:18:23.350 "ns_data": { 00:18:23.350 "id": 1, 00:18:23.350 "can_share": false 00:18:23.350 } 00:18:23.350 } 00:18:23.350 ], 00:18:23.350 "mp_policy": "active_passive" 00:18:23.350 } 00:18:23.350 } 00:18:23.350 ]' 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:23.350 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:23.608 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:23.608 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:23.867 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=96802dd9-32a2-42ce-a8fd-a3064ff2b855 00:18:23.867 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 96802dd9-32a2-42ce-a8fd-a3064ff2b855 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.126 12:12:11 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:24.126 12:12:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.386 { 00:18:24.386 "name": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:24.386 "aliases": [ 00:18:24.386 "lvs/nvme0n1p0" 00:18:24.386 ], 00:18:24.386 "product_name": "Logical Volume", 00:18:24.386 "block_size": 4096, 00:18:24.386 "num_blocks": 26476544, 00:18:24.386 "uuid": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:24.386 "assigned_rate_limits": { 00:18:24.386 "rw_ios_per_sec": 0, 00:18:24.386 "rw_mbytes_per_sec": 0, 00:18:24.386 "r_mbytes_per_sec": 0, 00:18:24.386 "w_mbytes_per_sec": 0 00:18:24.386 }, 00:18:24.386 "claimed": false, 00:18:24.386 "zoned": false, 00:18:24.386 "supported_io_types": { 00:18:24.386 "read": true, 00:18:24.386 "write": true, 00:18:24.386 "unmap": true, 00:18:24.386 "flush": false, 00:18:24.386 "reset": true, 00:18:24.386 "nvme_admin": false, 00:18:24.386 "nvme_io": false, 00:18:24.386 "nvme_io_md": false, 00:18:24.386 "write_zeroes": true, 00:18:24.386 "zcopy": false, 00:18:24.386 "get_zone_info": false, 00:18:24.386 "zone_management": false, 00:18:24.386 "zone_append": false, 00:18:24.386 "compare": false, 00:18:24.386 "compare_and_write": false, 00:18:24.386 "abort": false, 00:18:24.386 "seek_hole": true, 00:18:24.386 "seek_data": true, 00:18:24.386 "copy": false, 00:18:24.386 "nvme_iov_md": false 00:18:24.386 }, 00:18:24.386 "driver_specific": { 00:18:24.386 "lvol": { 00:18:24.386 "lvol_store_uuid": "96802dd9-32a2-42ce-a8fd-a3064ff2b855", 00:18:24.386 "base_bdev": "nvme0n1", 00:18:24.386 "thin_provision": true, 00:18:24.386 "num_allocated_clusters": 0, 00:18:24.386 "snapshot": false, 00:18:24.386 "clone": false, 00:18:24.386 "esnap_clone": false 00:18:24.386 } 00:18:24.386 } 00:18:24.386 } 00:18:24.386 ]' 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:24.386 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:24.652 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.911 { 00:18:24.911 "name": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:24.911 "aliases": [ 00:18:24.911 "lvs/nvme0n1p0" 00:18:24.911 ], 00:18:24.911 "product_name": "Logical Volume", 00:18:24.911 "block_size": 4096, 00:18:24.911 "num_blocks": 26476544, 00:18:24.911 "uuid": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:24.911 "assigned_rate_limits": { 00:18:24.911 "rw_ios_per_sec": 0, 00:18:24.911 "rw_mbytes_per_sec": 0, 00:18:24.911 "r_mbytes_per_sec": 0, 00:18:24.911 "w_mbytes_per_sec": 0 00:18:24.911 }, 00:18:24.911 "claimed": false, 00:18:24.911 "zoned": false, 00:18:24.911 "supported_io_types": { 00:18:24.911 "read": true, 00:18:24.911 "write": true, 00:18:24.911 "unmap": true, 00:18:24.911 "flush": false, 00:18:24.911 "reset": true, 00:18:24.911 "nvme_admin": false, 00:18:24.911 "nvme_io": false, 00:18:24.911 "nvme_io_md": false, 00:18:24.911 "write_zeroes": true, 00:18:24.911 "zcopy": false, 00:18:24.911 "get_zone_info": false, 00:18:24.911 "zone_management": false, 00:18:24.911 "zone_append": false, 00:18:24.911 "compare": false, 00:18:24.911 "compare_and_write": false, 00:18:24.911 "abort": false, 00:18:24.911 "seek_hole": true, 00:18:24.911 "seek_data": true, 00:18:24.911 "copy": false, 00:18:24.911 "nvme_iov_md": false 00:18:24.911 }, 00:18:24.911 "driver_specific": { 00:18:24.911 "lvol": { 00:18:24.911 "lvol_store_uuid": "96802dd9-32a2-42ce-a8fd-a3064ff2b855", 00:18:24.911 "base_bdev": "nvme0n1", 00:18:24.911 "thin_provision": true, 00:18:24.911 "num_allocated_clusters": 0, 00:18:24.911 "snapshot": false, 00:18:24.911 "clone": false, 00:18:24.911 "esnap_clone": false 00:18:24.911 } 00:18:24.911 } 00:18:24.911 } 00:18:24.911 ]' 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.911 12:12:12 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:25.169 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=ca1213b9-94cd-48fe-8e3f-f348f660abbb 
00:18:25.169 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:18:25.169 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca1213b9-94cd-48fe-8e3f-f348f660abbb 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:25.427 { 00:18:25.427 "name": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:25.427 "aliases": [ 00:18:25.427 "lvs/nvme0n1p0" 00:18:25.427 ], 00:18:25.427 "product_name": "Logical Volume", 00:18:25.427 "block_size": 4096, 00:18:25.427 "num_blocks": 26476544, 00:18:25.427 "uuid": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:25.427 "assigned_rate_limits": { 00:18:25.427 "rw_ios_per_sec": 0, 00:18:25.427 "rw_mbytes_per_sec": 0, 00:18:25.427 "r_mbytes_per_sec": 0, 00:18:25.427 "w_mbytes_per_sec": 0 00:18:25.427 }, 00:18:25.427 "claimed": false, 00:18:25.427 "zoned": false, 00:18:25.427 "supported_io_types": { 00:18:25.427 "read": true, 00:18:25.427 "write": true, 00:18:25.427 "unmap": true, 00:18:25.427 "flush": false, 00:18:25.427 "reset": true, 00:18:25.427 "nvme_admin": false, 00:18:25.427 "nvme_io": false, 00:18:25.427 "nvme_io_md": false, 00:18:25.427 "write_zeroes": true, 00:18:25.427 "zcopy": false, 00:18:25.427 "get_zone_info": false, 00:18:25.427 "zone_management": false, 00:18:25.427 "zone_append": false, 00:18:25.427 "compare": false, 00:18:25.427 "compare_and_write": false, 00:18:25.427 "abort": false, 00:18:25.427 "seek_hole": true, 00:18:25.427 "seek_data": true, 00:18:25.427 "copy": false, 00:18:25.427 "nvme_iov_md": false 00:18:25.427 }, 00:18:25.427 "driver_specific": { 00:18:25.427 "lvol": { 00:18:25.427 "lvol_store_uuid": "96802dd9-32a2-42ce-a8fd-a3064ff2b855", 00:18:25.427 "base_bdev": "nvme0n1", 00:18:25.427 "thin_provision": true, 00:18:25.427 "num_allocated_clusters": 0, 00:18:25.427 "snapshot": false, 00:18:25.427 "clone": false, 00:18:25.427 "esnap_clone": false 00:18:25.427 } 00:18:25.427 } 00:18:25.427 } 00:18:25.427 ]' 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:25.427 12:12:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ca1213b9-94cd-48fe-8e3f-f348f660abbb -c nvc0n1p0 --l2p_dram_limit 60 00:18:25.685 [2024-07-26 12:12:13.452326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-07-26 12:12:13.452383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.685 [2024-07-26 12:12:13.452413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:25.685 [2024-07-26 12:12:13.452428] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-07-26 12:12:13.452504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-07-26 12:12:13.452518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.685 [2024-07-26 12:12:13.452530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:25.685 [2024-07-26 12:12:13.452542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.685 [2024-07-26 12:12:13.452571] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.685 [2024-07-26 12:12:13.453749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.685 [2024-07-26 12:12:13.453779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.685 [2024-07-26 12:12:13.453796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.685 [2024-07-26 12:12:13.453807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:18:25.685 [2024-07-26 12:12:13.453820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.453907] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 12c8052f-b757-4b00-bd4c-e0f1752fa489 00:18:25.686 [2024-07-26 12:12:13.455320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.455340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:25.686 [2024-07-26 12:12:13.455354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:25.686 [2024-07-26 12:12:13.455365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.462811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.462840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.686 [2024-07-26 12:12:13.462858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.387 ms 00:18:25.686 [2024-07-26 12:12:13.462868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.462992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.463007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.686 [2024-07-26 12:12:13.463020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:25.686 [2024-07-26 12:12:13.463030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.463128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.463141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.686 [2024-07-26 12:12:13.463155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:25.686 [2024-07-26 12:12:13.463167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.463212] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:25.686 [2024-07-26 12:12:13.468746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.468786] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.686 [2024-07-26 12:12:13.468798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.560 ms 00:18:25.686 [2024-07-26 12:12:13.468811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.468860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.468873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.686 [2024-07-26 12:12:13.468884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:25.686 [2024-07-26 12:12:13.468896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.468942] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:25.686 [2024-07-26 12:12:13.469092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.686 [2024-07-26 12:12:13.469108] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.686 [2024-07-26 12:12:13.469139] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:25.686 [2024-07-26 12:12:13.469153] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469169] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469180] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:25.686 [2024-07-26 12:12:13.469195] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.686 [2024-07-26 12:12:13.469208] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.686 [2024-07-26 12:12:13.469220] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.686 [2024-07-26 12:12:13.469230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.469243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.686 [2024-07-26 12:12:13.469253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:18:25.686 [2024-07-26 12:12:13.469265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.469346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.686 [2024-07-26 12:12:13.469359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.686 [2024-07-26 12:12:13.469369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:25.686 [2024-07-26 12:12:13.469381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.686 [2024-07-26 12:12:13.469488] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.686 [2024-07-26 12:12:13.469527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.686 [2024-07-26 12:12:13.469541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469557] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.686 [2024-07-26 
12:12:13.469584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:25.686 [2024-07-26 12:12:13.469631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.686 [2024-07-26 12:12:13.469654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.686 [2024-07-26 12:12:13.469667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:25.686 [2024-07-26 12:12:13.469677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.686 [2024-07-26 12:12:13.469688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.686 [2024-07-26 12:12:13.469697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:25.686 [2024-07-26 12:12:13.469709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.686 [2024-07-26 12:12:13.469732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.686 [2024-07-26 12:12:13.469762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.686 [2024-07-26 12:12:13.469796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.686 [2024-07-26 12:12:13.469825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.686 [2024-07-26 12:12:13.469857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.686 [2024-07-26 12:12:13.469877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.686 [2024-07-26 12:12:13.469886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.686 [2024-07-26 12:12:13.469908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.686 [2024-07-26 12:12:13.469920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:25.686 [2024-07-26 12:12:13.469928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:25.686 [2024-07-26 12:12:13.469942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.686 [2024-07-26 12:12:13.469951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:25.686 [2024-07-26 12:12:13.469962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.469973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.686 [2024-07-26 12:12:13.469985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:25.686 [2024-07-26 12:12:13.469994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.470005] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.686 [2024-07-26 12:12:13.470029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.686 [2024-07-26 12:12:13.470055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.686 [2024-07-26 12:12:13.470065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.686 [2024-07-26 12:12:13.470078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.686 [2024-07-26 12:12:13.470087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.686 [2024-07-26 12:12:13.470101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.686 [2024-07-26 12:12:13.470111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.686 [2024-07-26 12:12:13.470133] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.686 [2024-07-26 12:12:13.470143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.686 [2024-07-26 12:12:13.470159] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.686 [2024-07-26 12:12:13.470172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.686 [2024-07-26 12:12:13.470189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:25.686 [2024-07-26 12:12:13.470200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:25.686 [2024-07-26 12:12:13.470215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:25.687 [2024-07-26 12:12:13.470226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:25.687 [2024-07-26 12:12:13.470238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:25.687 [2024-07-26 12:12:13.470248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:25.687 [2024-07-26 12:12:13.470261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:25.687 [2024-07-26 12:12:13.470271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:25.687 [2024-07-26 
12:12:13.470284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:25.687 [2024-07-26 12:12:13.470294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:25.687 [2024-07-26 12:12:13.470358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.687 [2024-07-26 12:12:13.470369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.687 [2024-07-26 12:12:13.470394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.687 [2024-07-26 12:12:13.470407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.687 [2024-07-26 12:12:13.470417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.687 [2024-07-26 12:12:13.470431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.687 [2024-07-26 12:12:13.470441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.687 [2024-07-26 12:12:13.470453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:18:25.687 [2024-07-26 12:12:13.470463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.687 [2024-07-26 12:12:13.470539] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:25.687 [2024-07-26 12:12:13.470551] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:29.875 [2024-07-26 12:12:17.362286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.362351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:29.875 [2024-07-26 12:12:17.362387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3898.059 ms 00:18:29.875 [2024-07-26 12:12:17.362398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.407964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.408018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.875 [2024-07-26 12:12:17.408037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.328 ms 00:18:29.875 [2024-07-26 12:12:17.408048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.408221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.408235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.875 [2024-07-26 12:12:17.408248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:29.875 [2024-07-26 12:12:17.408261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.465869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.465928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.875 [2024-07-26 12:12:17.465952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.622 ms 00:18:29.875 [2024-07-26 12:12:17.465966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.466046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.466061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.875 [2024-07-26 12:12:17.466080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:29.875 [2024-07-26 12:12:17.466094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.466663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.466686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.875 [2024-07-26 12:12:17.466706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:18:29.875 [2024-07-26 12:12:17.466719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.466888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.466907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.875 [2024-07-26 12:12:17.466925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:29.875 [2024-07-26 12:12:17.466939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.492773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.492825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.875 [2024-07-26 
12:12:17.492843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.829 ms 00:18:29.875 [2024-07-26 12:12:17.492854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.507372] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:29.875 [2024-07-26 12:12:17.523860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.523924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.875 [2024-07-26 12:12:17.523939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.917 ms 00:18:29.875 [2024-07-26 12:12:17.523952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.616548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.616613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:29.875 [2024-07-26 12:12:17.616630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.686 ms 00:18:29.875 [2024-07-26 12:12:17.616643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.616892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.616909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:29.875 [2024-07-26 12:12:17.616921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:18:29.875 [2024-07-26 12:12:17.616937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.655630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.655692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:29.875 [2024-07-26 12:12:17.655709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.672 ms 00:18:29.875 [2024-07-26 12:12:17.655722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.693229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.693299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:29.875 [2024-07-26 12:12:17.693317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.503 ms 00:18:29.875 [2024-07-26 12:12:17.693330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.694111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.694145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:29.875 [2024-07-26 12:12:17.694159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:18:29.875 [2024-07-26 12:12:17.694171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.875 [2024-07-26 12:12:17.829900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.875 [2024-07-26 12:12:17.829974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:29.875 [2024-07-26 12:12:17.830007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 135.863 ms 00:18:29.875 [2024-07-26 12:12:17.830025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 
12:12:17.871498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.133 [2024-07-26 12:12:17.871566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:30.133 [2024-07-26 12:12:17.871583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.492 ms 00:18:30.133 [2024-07-26 12:12:17.871596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 12:12:17.913530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.133 [2024-07-26 12:12:17.913599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:30.133 [2024-07-26 12:12:17.913614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.955 ms 00:18:30.133 [2024-07-26 12:12:17.913633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 12:12:17.955280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.133 [2024-07-26 12:12:17.955363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:30.133 [2024-07-26 12:12:17.955380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.672 ms 00:18:30.133 [2024-07-26 12:12:17.955392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 12:12:17.955447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.133 [2024-07-26 12:12:17.955460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:30.133 [2024-07-26 12:12:17.955472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:30.133 [2024-07-26 12:12:17.955488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 12:12:17.955641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.133 [2024-07-26 12:12:17.955658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:30.133 [2024-07-26 12:12:17.955668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:30.133 [2024-07-26 12:12:17.955682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.133 [2024-07-26 12:12:17.956815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4511.293 ms, result 0 00:18:30.133 { 00:18:30.133 "name": "ftl0", 00:18:30.133 "uuid": "12c8052f-b757-4b00-bd4c-e0f1752fa489" 00:18:30.134 } 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:30.134 12:12:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:30.392 12:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:30.650 [ 00:18:30.650 { 00:18:30.650 "name": "ftl0", 00:18:30.650 "aliases": [ 00:18:30.650 "12c8052f-b757-4b00-bd4c-e0f1752fa489" 00:18:30.650 ], 00:18:30.650 "product_name": "FTL 
disk", 00:18:30.650 "block_size": 4096, 00:18:30.650 "num_blocks": 20971520, 00:18:30.650 "uuid": "12c8052f-b757-4b00-bd4c-e0f1752fa489", 00:18:30.650 "assigned_rate_limits": { 00:18:30.650 "rw_ios_per_sec": 0, 00:18:30.650 "rw_mbytes_per_sec": 0, 00:18:30.650 "r_mbytes_per_sec": 0, 00:18:30.650 "w_mbytes_per_sec": 0 00:18:30.650 }, 00:18:30.650 "claimed": false, 00:18:30.650 "zoned": false, 00:18:30.650 "supported_io_types": { 00:18:30.650 "read": true, 00:18:30.650 "write": true, 00:18:30.650 "unmap": true, 00:18:30.650 "flush": true, 00:18:30.650 "reset": false, 00:18:30.650 "nvme_admin": false, 00:18:30.650 "nvme_io": false, 00:18:30.650 "nvme_io_md": false, 00:18:30.650 "write_zeroes": true, 00:18:30.650 "zcopy": false, 00:18:30.650 "get_zone_info": false, 00:18:30.650 "zone_management": false, 00:18:30.650 "zone_append": false, 00:18:30.650 "compare": false, 00:18:30.650 "compare_and_write": false, 00:18:30.650 "abort": false, 00:18:30.650 "seek_hole": false, 00:18:30.650 "seek_data": false, 00:18:30.650 "copy": false, 00:18:30.650 "nvme_iov_md": false 00:18:30.650 }, 00:18:30.650 "driver_specific": { 00:18:30.650 "ftl": { 00:18:30.650 "base_bdev": "ca1213b9-94cd-48fe-8e3f-f348f660abbb", 00:18:30.650 "cache": "nvc0n1p0" 00:18:30.650 } 00:18:30.650 } 00:18:30.650 } 00:18:30.650 ] 00:18:30.650 12:12:18 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:18:30.650 12:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:30.650 12:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:30.909 12:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:30.909 12:12:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:30.909 [2024-07-26 12:12:18.804067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.804130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:30.909 [2024-07-26 12:12:18.804152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.909 [2024-07-26 12:12:18.804163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.804206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:30.909 [2024-07-26 12:12:18.808313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.808353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:30.909 [2024-07-26 12:12:18.808367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.095 ms 00:18:30.909 [2024-07-26 12:12:18.808381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.808885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.808911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:30.909 [2024-07-26 12:12:18.808923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:18:30.909 [2024-07-26 12:12:18.808940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.811573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.811615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.909 
[2024-07-26 12:12:18.811626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:18:30.909 [2024-07-26 12:12:18.811640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.816888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.816927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:30.909 [2024-07-26 12:12:18.816939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:18:30.909 [2024-07-26 12:12:18.816958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.858626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.858695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.909 [2024-07-26 12:12:18.858728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.652 ms 00:18:30.909 [2024-07-26 12:12:18.858754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.884621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.884708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.909 [2024-07-26 12:12:18.884725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.832 ms 00:18:30.909 [2024-07-26 12:12:18.884739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.909 [2024-07-26 12:12:18.885018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.909 [2024-07-26 12:12:18.885036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.909 [2024-07-26 12:12:18.885047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:30.909 [2024-07-26 12:12:18.885060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.169 [2024-07-26 12:12:18.927419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.169 [2024-07-26 12:12:18.927490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:31.169 [2024-07-26 12:12:18.927507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.397 ms 00:18:31.169 [2024-07-26 12:12:18.927520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.169 [2024-07-26 12:12:18.969309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.169 [2024-07-26 12:12:18.969385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:31.169 [2024-07-26 12:12:18.969406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.767 ms 00:18:31.169 [2024-07-26 12:12:18.969419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.169 [2024-07-26 12:12:19.012055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.169 [2024-07-26 12:12:19.012114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:31.169 [2024-07-26 12:12:19.012155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.618 ms 00:18:31.169 [2024-07-26 12:12:19.012168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.169 [2024-07-26 12:12:19.054558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.169 [2024-07-26 12:12:19.054633] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:31.169 [2024-07-26 12:12:19.054650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.280 ms 00:18:31.169 [2024-07-26 12:12:19.054663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.169 [2024-07-26 12:12:19.054763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:31.169 [2024-07-26 12:12:19.054786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.054995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 
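Each management step in the trace above is logged with a name, a per-step duration, and a status, so the per-phase cost of an FTL startup or shutdown can be read straight out of the console log. A small awk sketch for summarizing those entries is shown below; it assumes each trace entry lands on its own line in the captured log, and "build.log" is only a placeholder file name, not something the test produces.

# Sketch: rank FTL management steps by duration from a captured console log.
# Assumes one trace entry per line; "build.log" is a placeholder name.
awk '
  /trace_step.*name:/     { sub(/.*name: /, ""); name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, name }
' build.log | sort -rn | head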
[2024-07-26 12:12:19.055060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:31.169 [2024-07-26 12:12:19.055385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:31.169 [2024-07-26 12:12:19.055508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.055995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.056010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.056021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.056034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.056045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:31.170 [2024-07-26 12:12:19.056066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:31.170 [2024-07-26 12:12:19.056077] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c8052f-b757-4b00-bd4c-e0f1752fa489 00:18:31.170 [2024-07-26 12:12:19.056090] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:31.170 [2024-07-26 12:12:19.056104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:31.170 [2024-07-26 12:12:19.056126] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:31.170 [2024-07-26 12:12:19.056137] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:31.170 [2024-07-26 12:12:19.056150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:31.170 [2024-07-26 12:12:19.056160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:31.170 [2024-07-26 12:12:19.056172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:31.170 [2024-07-26 12:12:19.056181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:31.170 [2024-07-26 12:12:19.056193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:31.170 [2024-07-26 12:12:19.056204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.170 [2024-07-26 12:12:19.056216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:31.170 [2024-07-26 12:12:19.056227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:18:31.170 [2024-07-26 12:12:19.056240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.170 [2024-07-26 12:12:19.077976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.170 [2024-07-26 12:12:19.078047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:31.170 [2024-07-26 12:12:19.078064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.673 ms 00:18:31.170 [2024-07-26 12:12:19.078077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.170 [2024-07-26 12:12:19.078642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.170 [2024-07-26 12:12:19.078664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:31.170 [2024-07-26 12:12:19.078677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:18:31.170 [2024-07-26 12:12:19.078691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.429 [2024-07-26 12:12:19.151951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.429 [2024-07-26 12:12:19.152017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.429 [2024-07-26 12:12:19.152033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.429 [2024-07-26 12:12:19.152046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
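The band dump above prints one entry per band with its valid-block count, band size, write count, and state; on this freshly created device every band is still free, which is also why the statistics block reports zero user writes and an infinite write amplification (WAF is total writes divided by user writes). For longer runs the same entries can be flattened into CSV for a quick look with the sketch below; the pattern simply matches the log format shown above, and "build.log" is again a placeholder name.

# Sketch: turn the per-band dump into CSV (band, valid, size, wr_cnt, state).
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z_]*' build.log |
  awk '{ gsub(/:/, "", $2); print $2 "," $3 "," $5 "," $7 "," $9 }' > bands.csv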
00:18:31.429 [2024-07-26 12:12:19.152140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.429 [2024-07-26 12:12:19.152156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.429 [2024-07-26 12:12:19.152167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.429 [2024-07-26 12:12:19.152179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.429 [2024-07-26 12:12:19.152306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.429 [2024-07-26 12:12:19.152323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.429 [2024-07-26 12:12:19.152334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.429 [2024-07-26 12:12:19.152347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.429 [2024-07-26 12:12:19.152378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.429 [2024-07-26 12:12:19.152394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.429 [2024-07-26 12:12:19.152405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.429 [2024-07-26 12:12:19.152417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.429 [2024-07-26 12:12:19.308127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.429 [2024-07-26 12:12:19.308208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.429 [2024-07-26 12:12:19.308227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.429 [2024-07-26 12:12:19.308243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.413770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.413870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.687 [2024-07-26 12:12:19.413889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.413904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.687 [2024-07-26 12:12:19.414158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.414172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.687 [2024-07-26 12:12:19.414322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.414336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.687 [2024-07-26 12:12:19.414524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 
12:12:19.414538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:31.687 [2024-07-26 12:12:19.414631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.414644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.687 [2024-07-26 12:12:19.414743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.414757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.414830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.687 [2024-07-26 12:12:19.414854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.687 [2024-07-26 12:12:19.414866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.687 [2024-07-26 12:12:19.414880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.687 [2024-07-26 12:12:19.415141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 612.001 ms, result 0 00:18:31.687 true 00:18:31.687 12:12:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78140 00:18:31.687 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 78140 ']' 00:18:31.687 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 78140 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78140 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.688 killing process with pid 78140 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78140' 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 78140 00:18:31.688 12:12:19 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 78140 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:36.954 12:12:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:36.954 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:36.954 fio-3.35 00:18:36.954 Starting 1 thread 00:18:42.323 00:18:42.323 test: (groupid=0, jobs=1): err= 0: pid=78365: Fri Jul 26 12:12:29 2024 00:18:42.323 read: IOPS=1011, BW=67.1MiB/s (70.4MB/s)(255MiB/3791msec) 00:18:42.323 slat (nsec): min=4353, max=30743, avg=6456.31, stdev=2537.32 00:18:42.323 clat (usec): min=258, max=1614, avg=445.59, stdev=65.90 00:18:42.323 lat (usec): min=264, max=1620, avg=452.05, stdev=66.20 00:18:42.323 clat percentiles (usec): 00:18:42.323 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 396], 00:18:42.323 | 30.00th=[ 404], 40.00th=[ 420], 50.00th=[ 457], 60.00th=[ 465], 00:18:42.323 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[ 545], 00:18:42.323 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 758], 00:18:42.323 | 99.99th=[ 1614] 00:18:42.323 write: IOPS=1018, BW=67.6MiB/s (70.9MB/s)(256MiB/3787msec); 0 zone resets 00:18:42.323 slat (nsec): min=15359, max=76539, avg=19186.47, stdev=4369.32 00:18:42.323 clat (usec): min=327, max=981, avg=503.92, stdev=76.64 00:18:42.323 lat (usec): min=350, max=1020, avg=523.11, stdev=76.83 00:18:42.323 clat percentiles (usec): 00:18:42.323 | 1.00th=[ 347], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 433], 00:18:42.323 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 510], 00:18:42.323 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 619], 00:18:42.323 | 99.00th=[ 791], 99.50th=[ 848], 99.90th=[ 930], 99.95th=[ 947], 00:18:42.323 | 99.99th=[ 979] 00:18:42.323 bw ( KiB/s): min=64199, max=71400, per=99.84%, avg=69127.86, stdev=2676.41, samples=7 00:18:42.323 iops : min= 944, max= 1050, avg=1016.57, stdev=39.39, samples=7 00:18:42.323 lat (usec) : 500=68.24%, 750=31.06%, 1000=0.69% 00:18:42.323 
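The fio jobs in this test run against the FTL bdev through SPDK's external spdk_bdev ioengine; the bdev stack is handed to fio as the JSON produced earlier with save_subsystem_config, wrapped in the '{"subsystems": [' ... ']}' pair echoed above. A minimal standalone equivalent is sketched below. The job-file contents are an illustration rather than a copy of randw-verify.fio, and the spdk_json_conf option name is taken from the SPDK fio plugin, so it should be checked against the SPDK version in use; on ASan builds the sanitizer runtime also has to be preloaded ahead of the plugin, as the trace above does.

# Sketch: drive an FTL bdev named ftl0 with stock fio via the spdk_bdev plugin.
# ftl.json is assumed to hold the '{"subsystems": [...]}' config dumped earlier.
cat > randw-verify.sketch.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=ftl.json
thread=1
rw=randwrite
verify=crc32c
bs=68k
iodepth=1
[test]
filename=ftl0
size=256M
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio randw-verify.sketch.fio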
lat (msec) : 2=0.01% 00:18:42.323 cpu : usr=99.26%, sys=0.08%, ctx=7, majf=0, minf=1169 00:18:42.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.323 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.323 00:18:42.323 Run status group 0 (all jobs): 00:18:42.323 READ: bw=67.1MiB/s (70.4MB/s), 67.1MiB/s-67.1MiB/s (70.4MB/s-70.4MB/s), io=255MiB (267MB), run=3791-3791msec 00:18:42.323 WRITE: bw=67.6MiB/s (70.9MB/s), 67.6MiB/s-67.6MiB/s (70.9MB/s-70.9MB/s), io=256MiB (269MB), run=3787-3787msec 00:18:44.224 ----------------------------------------------------- 00:18:44.224 Suppressions used: 00:18:44.224 count bytes template 00:18:44.224 1 5 /usr/src/fio/parse.c 00:18:44.224 1 8 libtcmalloc_minimal.so 00:18:44.224 1 904 libcrypto.so 00:18:44.224 ----------------------------------------------------- 00:18:44.224 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:44.224 12:12:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:44.224 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:44.224 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:44.224 fio-3.35 00:18:44.224 Starting 2 threads 00:19:10.765 00:19:10.765 first_half: (groupid=0, jobs=1): err= 0: pid=78468: Fri Jul 26 12:12:58 2024 00:19:10.765 read: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(255MiB/24778msec) 00:19:10.765 slat (nsec): min=3510, max=34024, avg=5931.82, stdev=1936.72 00:19:10.765 clat (usec): min=976, max=276576, avg=37677.64, stdev=19733.99 00:19:10.765 lat (usec): min=982, max=276582, avg=37683.57, stdev=19734.22 00:19:10.765 clat percentiles (msec): 00:19:10.765 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:19:10.765 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:19:10.765 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 43], 95.00th=[ 58], 00:19:10.765 | 99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 215], 99.95th=[ 224], 00:19:10.765 | 99.99th=[ 268] 00:19:10.765 write: IOPS=3409, BW=13.3MiB/s (14.0MB/s)(256MiB/19224msec); 0 zone resets 00:19:10.765 slat (usec): min=4, max=545, avg= 7.96, stdev= 5.75 00:19:10.765 clat (usec): min=472, max=105530, avg=10856.63, stdev=18987.98 00:19:10.765 lat (usec): min=479, max=105541, avg=10864.59, stdev=18988.06 00:19:10.765 clat percentiles (usec): 00:19:10.765 | 1.00th=[ 1004], 5.00th=[ 1336], 10.00th=[ 1582], 20.00th=[ 1909], 00:19:10.765 | 30.00th=[ 2474], 40.00th=[ 4359], 50.00th=[ 5604], 60.00th=[ 6652], 00:19:10.765 | 70.00th=[ 7701], 80.00th=[ 10814], 90.00th=[ 13698], 95.00th=[ 73925], 00:19:10.765 | 99.00th=[ 87557], 99.50th=[ 96994], 99.90th=[102237], 99.95th=[103285], 00:19:10.765 | 99.99th=[105382] 00:19:10.765 bw ( KiB/s): min= 352, max=44656, per=89.03%, avg=21842.00, stdev=12867.40, samples=24 00:19:10.765 iops : min= 88, max=11164, avg=5460.50, stdev=3216.85, samples=24 00:19:10.765 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.40% 00:19:10.765 lat (msec) : 2=11.08%, 4=7.83%, 10=19.93%, 20=7.54%, 50=46.70% 00:19:10.765 lat (msec) : 100=5.15%, 250=1.27%, 500=0.01% 00:19:10.765 cpu : usr=99.22%, sys=0.20%, ctx=58, majf=0, minf=5571 00:19:10.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:10.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.765 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.765 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.765 second_half: (groupid=0, jobs=1): err= 0: pid=78469: Fri Jul 26 12:12:58 2024 00:19:10.765 read: IOPS=2610, BW=10.2MiB/s (10.7MB/s)(255MiB/24995msec) 00:19:10.765 slat (nsec): min=3417, max=41218, avg=5964.26, stdev=2062.92 00:19:10.765 clat (usec): min=861, max=281736, avg=36868.98, stdev=21047.94 00:19:10.765 lat (usec): min=867, max=281741, avg=36874.94, stdev=21048.16 00:19:10.765 clat percentiles (msec): 00:19:10.765 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 32], 20.00th=[ 33], 00:19:10.765 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:19:10.765 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 41], 95.00th=[ 51], 
00:19:10.765 | 99.00th=[ 157], 99.50th=[ 178], 99.90th=[ 211], 99.95th=[ 228], 00:19:10.765 | 99.99th=[ 275] 00:19:10.765 write: IOPS=3066, BW=12.0MiB/s (12.6MB/s)(256MiB/21371msec); 0 zone resets 00:19:10.765 slat (usec): min=4, max=549, avg= 8.08, stdev= 5.40 00:19:10.765 clat (usec): min=474, max=107076, avg=12087.91, stdev=20153.43 00:19:10.765 lat (usec): min=488, max=107083, avg=12095.99, stdev=20153.54 00:19:10.765 clat percentiles (usec): 00:19:10.766 | 1.00th=[ 963], 5.00th=[ 1221], 10.00th=[ 1434], 20.00th=[ 1778], 00:19:10.766 | 30.00th=[ 2343], 40.00th=[ 3916], 50.00th=[ 5604], 60.00th=[ 6783], 00:19:10.766 | 70.00th=[ 8291], 80.00th=[ 11863], 90.00th=[ 34341], 95.00th=[ 74974], 00:19:10.766 | 99.00th=[ 88605], 99.50th=[ 98042], 99.90th=[104334], 99.95th=[104334], 00:19:10.766 | 99.99th=[106431] 00:19:10.766 bw ( KiB/s): min= 1040, max=41224, per=82.20%, avg=20167.96, stdev=11770.85, samples=26 00:19:10.766 iops : min= 260, max=10306, avg=5041.96, stdev=2942.66, samples=26 00:19:10.766 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.55% 00:19:10.766 lat (msec) : 2=12.21%, 4=7.65%, 10=18.49%, 20=7.25%, 50=47.81% 00:19:10.766 lat (msec) : 100=4.31%, 250=1.63%, 500=0.01% 00:19:10.766 cpu : usr=99.26%, sys=0.22%, ctx=48, majf=0, minf=5542 00:19:10.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:10.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.766 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:10.766 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:10.766 00:19:10.766 Run status group 0 (all jobs): 00:19:10.766 READ: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=510MiB (534MB), run=24778-24995msec 00:19:10.766 WRITE: bw=24.0MiB/s (25.1MB/s), 12.0MiB/s-13.3MiB/s (12.6MB/s-14.0MB/s), io=512MiB (537MB), run=19224-21371msec 00:19:13.307 ----------------------------------------------------- 00:19:13.307 Suppressions used: 00:19:13.307 count bytes template 00:19:13.307 2 10 /usr/src/fio/parse.c 00:19:13.307 2 192 /usr/src/fio/iolog.c 00:19:13.307 1 8 libtcmalloc_minimal.so 00:19:13.307 1 904 libcrypto.so 00:19:13.307 ----------------------------------------------------- 00:19:13.307 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.307 12:13:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
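Before each fio invocation the helper resolves which sanitizer runtime the spdk_bdev plugin was linked against and preloads it ahead of the plugin; without that, an ASan-instrumented plugin loaded into an uninstrumented fio binary aborts at startup. The trace above does this with ldd piped through grep and awk, and the condensed sketch below captures the same idea (simplified: the real helper also looks for clang's libclang_rt.asan, as the sanitizers array shows).

# Sketch of the preload step visible in the trace: put the ASan runtime the
# plugin needs in front of the plugin itself before launching fio.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"   # job file passed as argument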
00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.307 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.308 12:13:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.308 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:13.308 fio-3.35 00:19:13.308 Starting 1 thread 00:19:28.187 00:19:28.187 test: (groupid=0, jobs=1): err= 0: pid=78800: Fri Jul 26 12:13:16 2024 00:19:28.187 read: IOPS=7926, BW=31.0MiB/s (32.5MB/s)(255MiB/8226msec) 00:19:28.187 slat (nsec): min=3362, max=35029, avg=5268.03, stdev=1822.13 00:19:28.187 clat (usec): min=608, max=31265, avg=16139.97, stdev=897.29 00:19:28.187 lat (usec): min=617, max=31272, avg=16145.23, stdev=897.28 00:19:28.187 clat percentiles (usec): 00:19:28.187 | 1.00th=[15008], 5.00th=[15270], 10.00th=[15401], 20.00th=[15664], 00:19:28.187 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16188], 00:19:28.187 | 70.00th=[16319], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 00:19:28.187 | 99.00th=[19006], 99.50th=[20317], 99.90th=[23462], 99.95th=[27395], 00:19:28.187 | 99.99th=[30802] 00:19:28.187 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(256MiB/5131msec); 0 zone resets 00:19:28.187 slat (usec): min=4, max=673, avg= 7.86, stdev= 8.35 00:19:28.187 clat (usec): min=605, max=78442, avg=9974.64, stdev=12808.96 00:19:28.187 lat (usec): min=612, max=78449, avg=9982.50, stdev=12808.98 00:19:28.187 clat percentiles (usec): 00:19:28.187 | 1.00th=[ 971], 5.00th=[ 1205], 10.00th=[ 1352], 20.00th=[ 1565], 00:19:28.187 | 30.00th=[ 1762], 40.00th=[ 2409], 50.00th=[ 6259], 60.00th=[ 7308], 00:19:28.187 | 70.00th=[ 8225], 80.00th=[ 9896], 90.00th=[35390], 95.00th=[38011], 00:19:28.187 | 99.00th=[50594], 99.50th=[52691], 99.90th=[58983], 99.95th=[66323], 00:19:28.187 | 99.99th=[76022] 00:19:28.187 bw ( KiB/s): min= 7496, max=72664, per=93.29%, avg=47662.55, stdev=16625.87, samples=11 00:19:28.187 iops : min= 1874, max=18166, avg=11915.64, stdev=4156.47, samples=11 00:19:28.187 lat (usec) : 750=0.03%, 1000=0.61% 00:19:28.187 lat (msec) : 2=17.79%, 4=2.68%, 10=19.31%, 20=51.27%, 50=7.74% 00:19:28.187 lat (msec) : 100=0.56% 00:19:28.187 cpu : usr=98.80%, sys=0.53%, ctx=21, 
majf=0, minf=5565 00:19:28.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:28.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.187 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.187 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.187 00:19:28.187 Run status group 0 (all jobs): 00:19:28.187 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8226-8226msec 00:19:28.187 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=256MiB (268MB), run=5131-5131msec 00:19:30.091 ----------------------------------------------------- 00:19:30.091 Suppressions used: 00:19:30.091 count bytes template 00:19:30.091 1 5 /usr/src/fio/parse.c 00:19:30.091 2 192 /usr/src/fio/iolog.c 00:19:30.091 1 8 libtcmalloc_minimal.so 00:19:30.091 1 904 libcrypto.so 00:19:30.091 ----------------------------------------------------- 00:19:30.091 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:30.091 Remove shared memory files 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62014 /dev/shm/spdk_tgt_trace.pid77056 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:30.091 00:19:30.091 real 1m8.785s 00:19:30.091 user 2m29.356s 00:19:30.091 sys 0m3.773s 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.091 12:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:30.091 ************************************ 00:19:30.091 END TEST ftl_fio_basic 00:19:30.091 ************************************ 00:19:30.091 12:13:17 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:30.091 12:13:17 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:30.091 12:13:17 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.091 12:13:17 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:30.091 ************************************ 00:19:30.091 START TEST ftl_bdevperf 00:19:30.091 ************************************ 00:19:30.091 12:13:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:30.350 * Looking for test storage... 
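The bdevperf stage that starts here launches the example app with -z so it comes up idle and waits to be configured over RPC; only then is the FTL bdev assembled on top of the NVMe drive at 0000:00:11.0 and its cache device. The sketch below shows that start-and-wait pattern with the same flags and RPC calls that appear further down in this log; the polling loop and the rpc_get_methods liveness probe are a simplification of the autotest waitforlisten helper, not a copy of it.

# Sketch: start bdevperf idle, wait for the RPC socket, then begin building
# the bdev stack. The loop stands in for the autotest waitforlisten helper.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" -z -T ftl0 &
bdevperf_pid=$!
until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                      # RPC server not listening yet
done
"$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0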
00:19:30.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:30.350 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:30.351 12:13:18 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79033 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79033 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 79033 ']' 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.351 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:30.351 [2024-07-26 12:13:18.206093] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:19:30.351 [2024-07-26 12:13:18.206237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79033 ] 00:19:30.610 [2024-07-26 12:13:18.377036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.870 [2024-07-26 12:13:18.606529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:31.129 12:13:18 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:31.388 12:13:19 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:31.388 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:31.647 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:31.648 { 00:19:31.648 "name": "nvme0n1", 00:19:31.648 "aliases": [ 00:19:31.648 "022562ff-1076-47c3-8366-c24f27bf6a0c" 00:19:31.648 ], 00:19:31.648 "product_name": "NVMe disk", 00:19:31.648 "block_size": 4096, 00:19:31.648 "num_blocks": 1310720, 00:19:31.648 "uuid": "022562ff-1076-47c3-8366-c24f27bf6a0c", 00:19:31.648 "assigned_rate_limits": { 00:19:31.648 "rw_ios_per_sec": 0, 00:19:31.648 "rw_mbytes_per_sec": 0, 00:19:31.648 "r_mbytes_per_sec": 0, 00:19:31.648 "w_mbytes_per_sec": 0 00:19:31.648 }, 00:19:31.648 "claimed": true, 00:19:31.648 "claim_type": "read_many_write_one", 00:19:31.648 "zoned": false, 00:19:31.648 "supported_io_types": { 00:19:31.648 "read": true, 00:19:31.648 "write": true, 00:19:31.648 "unmap": true, 00:19:31.648 "flush": true, 00:19:31.648 "reset": true, 00:19:31.648 "nvme_admin": true, 00:19:31.648 "nvme_io": true, 00:19:31.648 "nvme_io_md": false, 00:19:31.648 "write_zeroes": true, 00:19:31.648 "zcopy": false, 00:19:31.648 "get_zone_info": false, 00:19:31.648 "zone_management": false, 00:19:31.648 "zone_append": false, 00:19:31.648 "compare": true, 00:19:31.648 "compare_and_write": false, 00:19:31.648 "abort": true, 00:19:31.648 "seek_hole": false, 00:19:31.648 "seek_data": false, 00:19:31.648 "copy": true, 00:19:31.648 "nvme_iov_md": false 00:19:31.648 }, 00:19:31.648 "driver_specific": { 00:19:31.648 "nvme": [ 00:19:31.648 { 00:19:31.648 "pci_address": "0000:00:11.0", 00:19:31.648 "trid": { 00:19:31.648 "trtype": "PCIe", 00:19:31.648 "traddr": "0000:00:11.0" 00:19:31.648 }, 00:19:31.648 "ctrlr_data": { 00:19:31.648 "cntlid": 0, 00:19:31.648 "vendor_id": "0x1b36", 00:19:31.648 "model_number": "QEMU NVMe Ctrl", 00:19:31.648 "serial_number": "12341", 00:19:31.648 "firmware_revision": "8.0.0", 00:19:31.648 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:31.648 "oacs": { 00:19:31.648 "security": 0, 00:19:31.648 "format": 1, 00:19:31.648 "firmware": 0, 00:19:31.648 "ns_manage": 1 00:19:31.648 }, 00:19:31.648 "multi_ctrlr": false, 00:19:31.648 "ana_reporting": false 00:19:31.648 }, 00:19:31.648 "vs": { 00:19:31.648 "nvme_version": "1.4" 00:19:31.648 }, 00:19:31.648 "ns_data": { 00:19:31.648 "id": 1, 00:19:31.648 "can_share": false 00:19:31.648 } 00:19:31.648 } 00:19:31.648 ], 00:19:31.648 "mp_policy": "active_passive" 00:19:31.648 } 00:19:31.648 } 00:19:31.648 ]' 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:31.648 12:13:19 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:31.648 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:31.907 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=96802dd9-32a2-42ce-a8fd-a3064ff2b855 00:19:31.907 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:31.907 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 96802dd9-32a2-42ce-a8fd-a3064ff2b855 00:19:32.166 12:13:19 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:32.424 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b38306c5-fb8b-45c0-ad13-8c491eea7419 00:19:32.424 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b38306c5-fb8b-45c0-ad13-8c491eea7419 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:32.682 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:32.682 { 00:19:32.682 "name": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:32.682 "aliases": [ 00:19:32.682 "lvs/nvme0n1p0" 00:19:32.682 ], 00:19:32.682 "product_name": "Logical Volume", 00:19:32.683 "block_size": 4096, 00:19:32.683 "num_blocks": 26476544, 00:19:32.683 "uuid": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:32.683 "assigned_rate_limits": { 00:19:32.683 "rw_ios_per_sec": 0, 00:19:32.683 "rw_mbytes_per_sec": 0, 00:19:32.683 "r_mbytes_per_sec": 0, 00:19:32.683 "w_mbytes_per_sec": 0 00:19:32.683 }, 00:19:32.683 "claimed": false, 00:19:32.683 "zoned": false, 00:19:32.683 "supported_io_types": { 00:19:32.683 "read": true, 00:19:32.683 "write": true, 00:19:32.683 "unmap": true, 00:19:32.683 "flush": false, 00:19:32.683 "reset": true, 00:19:32.683 "nvme_admin": false, 00:19:32.683 "nvme_io": false, 00:19:32.683 "nvme_io_md": false, 00:19:32.683 "write_zeroes": true, 00:19:32.683 "zcopy": false, 00:19:32.683 "get_zone_info": false, 00:19:32.683 "zone_management": false, 00:19:32.683 "zone_append": false, 00:19:32.683 "compare": false, 00:19:32.683 "compare_and_write": false, 00:19:32.683 "abort": false, 00:19:32.683 "seek_hole": true, 
00:19:32.683 "seek_data": true, 00:19:32.683 "copy": false, 00:19:32.683 "nvme_iov_md": false 00:19:32.683 }, 00:19:32.683 "driver_specific": { 00:19:32.683 "lvol": { 00:19:32.683 "lvol_store_uuid": "b38306c5-fb8b-45c0-ad13-8c491eea7419", 00:19:32.683 "base_bdev": "nvme0n1", 00:19:32.683 "thin_provision": true, 00:19:32.683 "num_allocated_clusters": 0, 00:19:32.683 "snapshot": false, 00:19:32.683 "clone": false, 00:19:32.683 "esnap_clone": false 00:19:32.683 } 00:19:32.683 } 00:19:32.683 } 00:19:32.683 ]' 00:19:32.683 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:32.683 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:32.683 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:32.942 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:33.201 12:13:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.201 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.201 { 00:19:33.201 "name": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:33.201 "aliases": [ 00:19:33.201 "lvs/nvme0n1p0" 00:19:33.201 ], 00:19:33.201 "product_name": "Logical Volume", 00:19:33.201 "block_size": 4096, 00:19:33.201 "num_blocks": 26476544, 00:19:33.201 "uuid": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:33.201 "assigned_rate_limits": { 00:19:33.201 "rw_ios_per_sec": 0, 00:19:33.201 "rw_mbytes_per_sec": 0, 00:19:33.201 "r_mbytes_per_sec": 0, 00:19:33.201 "w_mbytes_per_sec": 0 00:19:33.201 }, 00:19:33.201 "claimed": false, 00:19:33.201 "zoned": false, 00:19:33.201 "supported_io_types": { 00:19:33.201 "read": true, 00:19:33.201 "write": true, 00:19:33.201 "unmap": true, 00:19:33.201 "flush": false, 00:19:33.201 "reset": true, 00:19:33.201 "nvme_admin": false, 00:19:33.201 "nvme_io": false, 00:19:33.201 "nvme_io_md": false, 00:19:33.201 "write_zeroes": true, 00:19:33.201 "zcopy": false, 00:19:33.201 "get_zone_info": false, 00:19:33.201 "zone_management": false, 00:19:33.201 "zone_append": false, 00:19:33.201 "compare": false, 00:19:33.201 "compare_and_write": false, 00:19:33.201 "abort": false, 00:19:33.201 "seek_hole": true, 00:19:33.201 "seek_data": true, 00:19:33.201 
"copy": false, 00:19:33.201 "nvme_iov_md": false 00:19:33.201 }, 00:19:33.201 "driver_specific": { 00:19:33.201 "lvol": { 00:19:33.201 "lvol_store_uuid": "b38306c5-fb8b-45c0-ad13-8c491eea7419", 00:19:33.201 "base_bdev": "nvme0n1", 00:19:33.201 "thin_provision": true, 00:19:33.201 "num_allocated_clusters": 0, 00:19:33.201 "snapshot": false, 00:19:33.201 "clone": false, 00:19:33.201 "esnap_clone": false 00:19:33.201 } 00:19:33.201 } 00:19:33.201 } 00:19:33.201 ]' 00:19:33.201 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:33.459 12:13:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9eaf5276-316d-4b49-975c-d93f032f0c90 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.717 { 00:19:33.717 "name": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:33.717 "aliases": [ 00:19:33.717 "lvs/nvme0n1p0" 00:19:33.717 ], 00:19:33.717 "product_name": "Logical Volume", 00:19:33.717 "block_size": 4096, 00:19:33.717 "num_blocks": 26476544, 00:19:33.717 "uuid": "9eaf5276-316d-4b49-975c-d93f032f0c90", 00:19:33.717 "assigned_rate_limits": { 00:19:33.717 "rw_ios_per_sec": 0, 00:19:33.717 "rw_mbytes_per_sec": 0, 00:19:33.717 "r_mbytes_per_sec": 0, 00:19:33.717 "w_mbytes_per_sec": 0 00:19:33.717 }, 00:19:33.717 "claimed": false, 00:19:33.717 "zoned": false, 00:19:33.717 "supported_io_types": { 00:19:33.717 "read": true, 00:19:33.717 "write": true, 00:19:33.717 "unmap": true, 00:19:33.717 "flush": false, 00:19:33.717 "reset": true, 00:19:33.717 "nvme_admin": false, 00:19:33.717 "nvme_io": false, 00:19:33.717 "nvme_io_md": false, 00:19:33.717 "write_zeroes": true, 00:19:33.717 "zcopy": false, 00:19:33.717 "get_zone_info": false, 00:19:33.717 "zone_management": false, 00:19:33.717 "zone_append": false, 00:19:33.717 "compare": false, 00:19:33.717 "compare_and_write": false, 00:19:33.717 "abort": false, 00:19:33.717 "seek_hole": true, 00:19:33.717 "seek_data": true, 00:19:33.717 "copy": false, 00:19:33.717 "nvme_iov_md": false 00:19:33.717 }, 00:19:33.717 "driver_specific": { 00:19:33.717 "lvol": { 00:19:33.717 "lvol_store_uuid": "b38306c5-fb8b-45c0-ad13-8c491eea7419", 00:19:33.717 "base_bdev": 
"nvme0n1", 00:19:33.717 "thin_provision": true, 00:19:33.717 "num_allocated_clusters": 0, 00:19:33.717 "snapshot": false, 00:19:33.717 "clone": false, 00:19:33.717 "esnap_clone": false 00:19:33.717 } 00:19:33.717 } 00:19:33.717 } 00:19:33.717 ]' 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.717 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.976 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.976 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.976 12:13:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.976 12:13:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:19:33.976 12:13:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9eaf5276-316d-4b49-975c-d93f032f0c90 -c nvc0n1p0 --l2p_dram_limit 20 00:19:33.976 [2024-07-26 12:13:21.883476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.883539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.976 [2024-07-26 12:13:21.883557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:33.976 [2024-07-26 12:13:21.883568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.883629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.883642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.976 [2024-07-26 12:13:21.883657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:33.976 [2024-07-26 12:13:21.883667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.883690] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.976 [2024-07-26 12:13:21.884813] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.976 [2024-07-26 12:13:21.884852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.884863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.976 [2024-07-26 12:13:21.884877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:19:33.976 [2024-07-26 12:13:21.884887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.885051] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 837615d0-e40b-4b28-bf62-e926a15fb27e 00:19:33.976 [2024-07-26 12:13:21.886460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.886497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:33.976 [2024-07-26 12:13:21.886513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:33.976 [2024-07-26 12:13:21.886526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.893949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.893989] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.976 [2024-07-26 12:13:21.894002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.399 ms 00:19:33.976 [2024-07-26 12:13:21.894016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.894117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.894147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.976 [2024-07-26 12:13:21.894158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:33.976 [2024-07-26 12:13:21.894174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.894240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.894255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.976 [2024-07-26 12:13:21.894265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:33.976 [2024-07-26 12:13:21.894277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.894300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.976 [2024-07-26 12:13:21.900079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.900125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.976 [2024-07-26 12:13:21.900141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.792 ms 00:19:33.976 [2024-07-26 12:13:21.900151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.976 [2024-07-26 12:13:21.900190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.976 [2024-07-26 12:13:21.900201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.976 [2024-07-26 12:13:21.900213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:33.976 [2024-07-26 12:13:21.900223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.977 [2024-07-26 12:13:21.900268] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:33.977 [2024-07-26 12:13:21.900404] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.977 [2024-07-26 12:13:21.900429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.977 [2024-07-26 12:13:21.900442] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:33.977 [2024-07-26 12:13:21.900459] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.977 [2024-07-26 12:13:21.900471] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.977 [2024-07-26 12:13:21.900485] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:33.977 [2024-07-26 12:13:21.900495] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.977 [2024-07-26 12:13:21.900507] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.977 [2024-07-26 12:13:21.900517] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:19:33.977 [2024-07-26 12:13:21.900529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.977 [2024-07-26 12:13:21.900539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.977 [2024-07-26 12:13:21.900554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:33.977 [2024-07-26 12:13:21.900564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.977 [2024-07-26 12:13:21.900633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.977 [2024-07-26 12:13:21.900646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.977 [2024-07-26 12:13:21.900660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:33.977 [2024-07-26 12:13:21.900670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.977 [2024-07-26 12:13:21.900752] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.977 [2024-07-26 12:13:21.900767] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.977 [2024-07-26 12:13:21.900781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.977 [2024-07-26 12:13:21.900793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.977 [2024-07-26 12:13:21.900814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:33.977 [2024-07-26 12:13:21.900835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.977 [2024-07-26 12:13:21.900847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.977 [2024-07-26 12:13:21.900869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.977 [2024-07-26 12:13:21.900878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:33.977 [2024-07-26 12:13:21.900890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.977 [2024-07-26 12:13:21.900899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.977 [2024-07-26 12:13:21.900911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:33.977 [2024-07-26 12:13:21.900920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.977 [2024-07-26 12:13:21.900942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:33.977 [2024-07-26 12:13:21.900965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.977 [2024-07-26 12:13:21.900987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:33.977 [2024-07-26 12:13:21.900996] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.977 [2024-07-26 12:13:21.901016] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.977 [2024-07-26 12:13:21.901048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.977 [2024-07-26 12:13:21.901076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.977 [2024-07-26 12:13:21.901111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.977 [2024-07-26 12:13:21.901143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.977 [2024-07-26 12:13:21.901152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:33.977 [2024-07-26 12:13:21.901163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.977 [2024-07-26 12:13:21.901172] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.977 [2024-07-26 12:13:21.901184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:33.977 [2024-07-26 12:13:21.901193] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.977 [2024-07-26 12:13:21.901213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:33.977 [2024-07-26 12:13:21.901225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901234] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.977 [2024-07-26 12:13:21.901246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.977 [2024-07-26 12:13:21.901256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.977 [2024-07-26 12:13:21.901279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.977 [2024-07-26 12:13:21.901293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.977 [2024-07-26 12:13:21.901302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.977 [2024-07-26 12:13:21.901313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.977 [2024-07-26 12:13:21.901322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.977 [2024-07-26 12:13:21.901334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.977 [2024-07-26 12:13:21.901347] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.977 [2024-07-26 12:13:21.901362] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.977 [2024-07-26 12:13:21.901373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:33.977 [2024-07-26 12:13:21.901386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:33.977 [2024-07-26 12:13:21.901397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:33.977 [2024-07-26 12:13:21.901409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:33.977 [2024-07-26 12:13:21.901419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:33.977 [2024-07-26 12:13:21.901433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:33.977 [2024-07-26 12:13:21.901443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:33.977 [2024-07-26 12:13:21.901455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:33.978 [2024-07-26 12:13:21.901465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:33.978 [2024-07-26 12:13:21.901480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:33.978 [2024-07-26 12:13:21.901534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.978 [2024-07-26 12:13:21.901548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.978 [2024-07-26 12:13:21.901571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.978 [2024-07-26 12:13:21.901581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.978 [2024-07-26 12:13:21.901594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.978 [2024-07-26 12:13:21.901604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.978 [2024-07-26 12:13:21.901620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.978 [2024-07-26 12:13:21.901637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:19:33.978 [2024-07-26 12:13:21.901650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.978 [2024-07-26 12:13:21.901686] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:33.978 [2024-07-26 12:13:21.901707] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:37.314 [2024-07-26 12:13:25.043299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.043367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:37.314 [2024-07-26 12:13:25.043388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3146.709 ms 00:19:37.314 [2024-07-26 12:13:25.043401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.098908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.098960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.314 [2024-07-26 12:13:25.098975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.170 ms 00:19:37.314 [2024-07-26 12:13:25.098988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.099153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.099170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.314 [2024-07-26 12:13:25.099181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:37.314 [2024-07-26 12:13:25.099196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.150037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.150093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.314 [2024-07-26 12:13:25.150108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.881 ms 00:19:37.314 [2024-07-26 12:13:25.150129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.150176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.150189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.314 [2024-07-26 12:13:25.150200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.314 [2024-07-26 12:13:25.150212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.150711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.150736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.314 [2024-07-26 12:13:25.150748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:37.314 [2024-07-26 12:13:25.150760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.150869] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.150884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.314 [2024-07-26 12:13:25.150897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:37.314 [2024-07-26 12:13:25.150912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.171809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.171852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.314 [2024-07-26 12:13:25.171867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.912 ms 00:19:37.314 [2024-07-26 12:13:25.171880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.185491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:37.314 [2024-07-26 12:13:25.191423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.191460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.314 [2024-07-26 12:13:25.191476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.475 ms 00:19:37.314 [2024-07-26 12:13:25.191487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.278098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.278184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:37.314 [2024-07-26 12:13:25.278213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.710 ms 00:19:37.314 [2024-07-26 12:13:25.278224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.314 [2024-07-26 12:13:25.278417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.314 [2024-07-26 12:13:25.278430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.314 [2024-07-26 12:13:25.278448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:37.314 [2024-07-26 12:13:25.278458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.317894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.317944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:37.573 [2024-07-26 12:13:25.317962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.441 ms 00:19:37.573 [2024-07-26 12:13:25.317973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.358434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.358495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:37.573 [2024-07-26 12:13:25.358515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.476 ms 00:19:37.573 [2024-07-26 12:13:25.358524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.359369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.359398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.573 [2024-07-26 12:13:25.359413] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:19:37.573 [2024-07-26 12:13:25.359423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.466935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.466997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:37.573 [2024-07-26 12:13:25.467020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.595 ms 00:19:37.573 [2024-07-26 12:13:25.467031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.506049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.506110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:37.573 [2024-07-26 12:13:25.506135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.032 ms 00:19:37.573 [2024-07-26 12:13:25.506149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.573 [2024-07-26 12:13:25.543774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.573 [2024-07-26 12:13:25.543831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:37.573 [2024-07-26 12:13:25.543848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.637 ms 00:19:37.573 [2024-07-26 12:13:25.543858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.831 [2024-07-26 12:13:25.580839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.831 [2024-07-26 12:13:25.580897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.831 [2024-07-26 12:13:25.580915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.994 ms 00:19:37.831 [2024-07-26 12:13:25.580925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.831 [2024-07-26 12:13:25.580973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.831 [2024-07-26 12:13:25.580985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.831 [2024-07-26 12:13:25.581002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:37.831 [2024-07-26 12:13:25.581012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.831 [2024-07-26 12:13:25.581110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.831 [2024-07-26 12:13:25.581138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.831 [2024-07-26 12:13:25.581152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:37.831 [2024-07-26 12:13:25.581164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.831 [2024-07-26 12:13:25.582249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3704.304 ms, result 0 00:19:37.831 { 00:19:37.831 "name": "ftl0", 00:19:37.831 "uuid": "837615d0-e40b-4b28-bf62-e926a15fb27e" 00:19:37.831 } 00:19:37.831 12:13:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:19:37.831 12:13:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:37.831 12:13:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:19:37.831 12:13:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:38.088 [2024-07-26 12:13:25.874451] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:38.088 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:38.088 Zero copy mechanism will not be used. 00:19:38.088 Running I/O for 4 seconds... 00:19:42.302 00:19:42.302 Latency(us) 00:19:42.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.302 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:42.302 ftl0 : 4.00 1802.68 119.71 0.00 0.00 580.75 185.88 1480.48 00:19:42.302 =================================================================================================================== 00:19:42.302 Total : 1802.68 119.71 0.00 0.00 580.75 185.88 1480.48 00:19:42.302 [2024-07-26 12:13:29.878803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:42.302 0 00:19:42.302 12:13:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:42.302 [2024-07-26 12:13:29.988927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:42.302 Running I/O for 4 seconds... 00:19:46.484 00:19:46.484 Latency(us) 00:19:46.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.484 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:46.484 ftl0 : 4.01 10452.38 40.83 0.00 0.00 12220.67 250.04 35163.09 00:19:46.484 =================================================================================================================== 00:19:46.484 Total : 10452.38 40.83 0.00 0.00 12220.67 0.00 35163.09 00:19:46.484 [2024-07-26 12:13:34.006592] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:46.484 0 00:19:46.484 12:13:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:46.484 [2024-07-26 12:13:34.127095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:46.484 Running I/O for 4 seconds... 
00:19:50.676 00:19:50.676 Latency(us) 00:19:50.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.676 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:50.676 Verification LBA range: start 0x0 length 0x1400000 00:19:50.676 ftl0 : 4.01 8205.93 32.05 0.00 0.00 15552.22 271.42 30109.71 00:19:50.676 =================================================================================================================== 00:19:50.676 Total : 8205.93 32.05 0.00 0.00 15552.22 0.00 30109.71 00:19:50.676 [2024-07-26 12:13:38.149301] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:50.676 0 00:19:50.676 12:13:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:50.676 [2024-07-26 12:13:38.340942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.341003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.676 [2024-07-26 12:13:38.341025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.676 [2024-07-26 12:13:38.341036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.341062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.676 [2024-07-26 12:13:38.344979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.345025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.676 [2024-07-26 12:13:38.345039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.906 ms 00:19:50.676 [2024-07-26 12:13:38.345051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.346733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.346787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.676 [2024-07-26 12:13:38.346801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:19:50.676 [2024-07-26 12:13:38.346814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.560471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.560552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.676 [2024-07-26 12:13:38.560570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 213.974 ms 00:19:50.676 [2024-07-26 12:13:38.560588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.565805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.565848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.676 [2024-07-26 12:13:38.565861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.185 ms 00:19:50.676 [2024-07-26 12:13:38.565874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.603751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.603799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.676 [2024-07-26 12:13:38.603813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.853 ms 00:19:50.676 [2024-07-26 12:13:38.603826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.626894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.626946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.676 [2024-07-26 12:13:38.626964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.065 ms 00:19:50.676 [2024-07-26 12:13:38.626977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.676 [2024-07-26 12:13:38.627146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.676 [2024-07-26 12:13:38.627165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.676 [2024-07-26 12:13:38.627176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:50.676 [2024-07-26 12:13:38.627192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.936 [2024-07-26 12:13:38.666548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.936 [2024-07-26 12:13:38.666597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:50.936 [2024-07-26 12:13:38.666611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.404 ms 00:19:50.936 [2024-07-26 12:13:38.666624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.936 [2024-07-26 12:13:38.706018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.936 [2024-07-26 12:13:38.706086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:50.936 [2024-07-26 12:13:38.706101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.418 ms 00:19:50.936 [2024-07-26 12:13:38.706113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.936 [2024-07-26 12:13:38.746324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.936 [2024-07-26 12:13:38.746389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.936 [2024-07-26 12:13:38.746404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.219 ms 00:19:50.936 [2024-07-26 12:13:38.746417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.936 [2024-07-26 12:13:38.786351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.936 [2024-07-26 12:13:38.786410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.936 [2024-07-26 12:13:38.786426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.891 ms 00:19:50.936 [2024-07-26 12:13:38.786442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.936 [2024-07-26 12:13:38.786482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.936 [2024-07-26 12:13:38.786502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:50.936 [2024-07-26 12:13:38.786603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.936 [2024-07-26 12:13:38.786856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.786994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787522] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.937 [2024-07-26 12:13:38.787795] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.937 [2024-07-26 12:13:38.787805] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 837615d0-e40b-4b28-bf62-e926a15fb27e 00:19:50.937 [2024-07-26 12:13:38.787819] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.937 [2024-07-26 12:13:38.787828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:50.937 [2024-07-26 12:13:38.787845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.937 [2024-07-26 12:13:38.787855] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.937 [2024-07-26 12:13:38.787872] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.937 [2024-07-26 12:13:38.787883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.937 [2024-07-26 12:13:38.787895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.937 [2024-07-26 12:13:38.787903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.937 [2024-07-26 12:13:38.787917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.937 [2024-07-26 12:13:38.787927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.937 [2024-07-26 12:13:38.787940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.937 [2024-07-26 12:13:38.787950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:19:50.937 [2024-07-26 12:13:38.787962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.937 [2024-07-26 12:13:38.808077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.937 [2024-07-26 12:13:38.808142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.937 [2024-07-26 12:13:38.808157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.094 ms 00:19:50.938 [2024-07-26 12:13:38.808169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.938 [2024-07-26 12:13:38.808601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.938 [2024-07-26 12:13:38.808620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.938 [2024-07-26 12:13:38.808631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:19:50.938 [2024-07-26 12:13:38.808644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.938 [2024-07-26 12:13:38.856710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.938 [2024-07-26 12:13:38.856777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.938 [2024-07-26 12:13:38.856792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.938 [2024-07-26 12:13:38.856808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.938 [2024-07-26 12:13:38.856880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.938 [2024-07-26 12:13:38.856894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.938 [2024-07-26 12:13:38.856904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.938 [2024-07-26 12:13:38.856916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.938 [2024-07-26 12:13:38.857022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.938 [2024-07-26 12:13:38.857039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.938 [2024-07-26 12:13:38.857050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.938 [2024-07-26 12:13:38.857062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.938 [2024-07-26 12:13:38.857079] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.938 [2024-07-26 12:13:38.857092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.938 [2024-07-26 12:13:38.857102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.938 [2024-07-26 12:13:38.857114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:38.979470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:38.979539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.197 [2024-07-26 12:13:38.979554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:38.979570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.197 [2024-07-26 12:13:39.080230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.197 [2024-07-26 12:13:39.080377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.197 [2024-07-26 12:13:39.080467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.197 [2024-07-26 12:13:39.080612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.197 [2024-07-26 12:13:39.080686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.197 [2024-07-26 12:13:39.080735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.197 [2024-07-26 12:13:39.080748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.197 [2024-07-26 12:13:39.080758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.197 [2024-07-26 12:13:39.080773] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:19:51.197 [2024-07-26 12:13:39.080816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:51.197 [2024-07-26 12:13:39.080830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:51.197 [2024-07-26 12:13:39.080840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:51.197 [2024-07-26 12:13:39.080852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:51.197 [2024-07-26 12:13:39.080971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 741.200 ms, result 0
00:19:51.197 true
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79033
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 79033 ']'
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 79033
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79033
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:51.197 killing process with pid 79033
12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79033'
Received shutdown signal, test time was about 4.000000 seconds
00:19:51.197
00:19:51.197 Latency(us)
00:19:51.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.197 ===================================================================================================================
00:19:51.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 79033
00:19:51.197 12:13:39 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 79033
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm
00:19:55.388 Remove shared memory files
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:19:55.388
00:19:55.388 real 0m25.108s
00:19:55.388 user 0m27.399s
00:19:55.388 sys 0m1.277s
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:55.388 12:13:43 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:19:55.388 ************************************
00:19:55.388 END TEST ftl_bdevperf
************************************ 00:19:55.388 12:13:43 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.388 12:13:43 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:55.388 12:13:43 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.388 12:13:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:55.388 ************************************ 00:19:55.388 START TEST ftl_trim 00:19:55.388 ************************************ 00:19:55.388 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.388 * Looking for test storage... 00:19:55.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.388 12:13:43 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79389 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:55.389 12:13:43 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79389 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79389 ']' 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.389 12:13:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:55.685 [2024-07-26 12:13:43.431710] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
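At this point trim.sh has exported its parameters (ftl0 on 0000:00:11.0 with 0000:00:10.0 as cache), started a dedicated spdk_tgt with core mask 0x7, recorded its pid in svcpid=79389, and is blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-poll pattern, not the exact autotest_common.sh implementation, follows; the socket path, retry budget and sleep interval are assumptions, and spdk_get_version is used only as a cheap readiness probe.

    #!/usr/bin/env bash
    # Sketch: start the SPDK target on cores 0-2 and wait until its RPC socket answers.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock            # default RPC socket path (assumed)

    "$SPDK_TGT" -m 0x7 &
    svcpid=$!

    for _ in $(seq 1 100); do          # retry budget chosen for illustration
        if "$RPC_PY" -s "$SOCK" spdk_get_version >/dev/null 2>&1; then
            break                      # target is up and serving RPCs
        fi
        sleep 0.5
    done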
00:19:55.685 [2024-07-26 12:13:43.431855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79389 ] 00:19:55.685 [2024-07-26 12:13:43.598382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.943 [2024-07-26 12:13:43.859249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.943 [2024-07-26 12:13:43.859263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.943 [2024-07-26 12:13:43.859279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.875 12:13:44 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.875 12:13:44 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:56.875 12:13:44 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:57.134 12:13:45 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:57.134 12:13:45 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:57.134 12:13:45 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:57.134 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:57.134 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:57.134 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:57.134 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:57.134 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:57.393 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:57.393 { 00:19:57.393 "name": "nvme0n1", 00:19:57.393 "aliases": [ 00:19:57.393 "f1cafc80-63e7-4979-aed8-de8b39d4ca4e" 00:19:57.393 ], 00:19:57.393 "product_name": "NVMe disk", 00:19:57.393 "block_size": 4096, 00:19:57.393 "num_blocks": 1310720, 00:19:57.393 "uuid": "f1cafc80-63e7-4979-aed8-de8b39d4ca4e", 00:19:57.393 "assigned_rate_limits": { 00:19:57.393 "rw_ios_per_sec": 0, 00:19:57.393 "rw_mbytes_per_sec": 0, 00:19:57.393 "r_mbytes_per_sec": 0, 00:19:57.393 "w_mbytes_per_sec": 0 00:19:57.393 }, 00:19:57.393 "claimed": true, 00:19:57.393 "claim_type": "read_many_write_one", 00:19:57.393 "zoned": false, 00:19:57.393 "supported_io_types": { 00:19:57.393 "read": true, 00:19:57.393 "write": true, 00:19:57.393 "unmap": true, 00:19:57.393 "flush": true, 00:19:57.393 "reset": true, 00:19:57.393 "nvme_admin": true, 00:19:57.393 "nvme_io": true, 00:19:57.393 "nvme_io_md": false, 00:19:57.393 "write_zeroes": true, 00:19:57.393 "zcopy": false, 00:19:57.393 "get_zone_info": false, 00:19:57.393 "zone_management": false, 00:19:57.393 "zone_append": false, 00:19:57.393 "compare": true, 00:19:57.393 "compare_and_write": false, 00:19:57.393 "abort": true, 00:19:57.393 "seek_hole": false, 00:19:57.393 "seek_data": false, 00:19:57.393 
"copy": true, 00:19:57.393 "nvme_iov_md": false 00:19:57.393 }, 00:19:57.393 "driver_specific": { 00:19:57.393 "nvme": [ 00:19:57.393 { 00:19:57.393 "pci_address": "0000:00:11.0", 00:19:57.393 "trid": { 00:19:57.393 "trtype": "PCIe", 00:19:57.393 "traddr": "0000:00:11.0" 00:19:57.393 }, 00:19:57.393 "ctrlr_data": { 00:19:57.393 "cntlid": 0, 00:19:57.393 "vendor_id": "0x1b36", 00:19:57.393 "model_number": "QEMU NVMe Ctrl", 00:19:57.393 "serial_number": "12341", 00:19:57.393 "firmware_revision": "8.0.0", 00:19:57.393 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:57.393 "oacs": { 00:19:57.393 "security": 0, 00:19:57.393 "format": 1, 00:19:57.393 "firmware": 0, 00:19:57.393 "ns_manage": 1 00:19:57.393 }, 00:19:57.393 "multi_ctrlr": false, 00:19:57.393 "ana_reporting": false 00:19:57.393 }, 00:19:57.393 "vs": { 00:19:57.393 "nvme_version": "1.4" 00:19:57.393 }, 00:19:57.393 "ns_data": { 00:19:57.393 "id": 1, 00:19:57.393 "can_share": false 00:19:57.393 } 00:19:57.393 } 00:19:57.393 ], 00:19:57.393 "mp_policy": "active_passive" 00:19:57.393 } 00:19:57.393 } 00:19:57.393 ]' 00:19:57.393 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:57.393 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:57.393 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:57.651 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:57.651 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:57.651 12:13:45 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.651 12:13:45 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b38306c5-fb8b-45c0-ad13-8c491eea7419 00:19:57.652 12:13:45 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.652 12:13:45 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b38306c5-fb8b-45c0-ad13-8c491eea7419 00:19:57.911 12:13:45 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:58.171 12:13:45 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=da950123-7fee-4d9f-b8b3-40b2d4c9e80f 00:19:58.171 12:13:46 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u da950123-7fee-4d9f-b8b3-40b2d4c9e80f 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:58.429 12:13:46 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:58.429 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:58.429 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:58.429 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.429 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:58.429 { 00:19:58.429 "name": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:58.429 "aliases": [ 00:19:58.429 "lvs/nvme0n1p0" 00:19:58.429 ], 00:19:58.429 "product_name": "Logical Volume", 00:19:58.429 "block_size": 4096, 00:19:58.429 "num_blocks": 26476544, 00:19:58.429 "uuid": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:58.429 "assigned_rate_limits": { 00:19:58.429 "rw_ios_per_sec": 0, 00:19:58.429 "rw_mbytes_per_sec": 0, 00:19:58.429 "r_mbytes_per_sec": 0, 00:19:58.429 "w_mbytes_per_sec": 0 00:19:58.429 }, 00:19:58.429 "claimed": false, 00:19:58.429 "zoned": false, 00:19:58.429 "supported_io_types": { 00:19:58.429 "read": true, 00:19:58.429 "write": true, 00:19:58.429 "unmap": true, 00:19:58.429 "flush": false, 00:19:58.429 "reset": true, 00:19:58.429 "nvme_admin": false, 00:19:58.429 "nvme_io": false, 00:19:58.429 "nvme_io_md": false, 00:19:58.429 "write_zeroes": true, 00:19:58.429 "zcopy": false, 00:19:58.429 "get_zone_info": false, 00:19:58.429 "zone_management": false, 00:19:58.429 "zone_append": false, 00:19:58.429 "compare": false, 00:19:58.429 "compare_and_write": false, 00:19:58.429 "abort": false, 00:19:58.429 "seek_hole": true, 00:19:58.429 "seek_data": true, 00:19:58.429 "copy": false, 00:19:58.429 "nvme_iov_md": false 00:19:58.429 }, 00:19:58.429 "driver_specific": { 00:19:58.429 "lvol": { 00:19:58.429 "lvol_store_uuid": "da950123-7fee-4d9f-b8b3-40b2d4c9e80f", 00:19:58.429 "base_bdev": "nvme0n1", 00:19:58.429 "thin_provision": true, 00:19:58.429 "num_allocated_clusters": 0, 00:19:58.429 "snapshot": false, 00:19:58.429 "clone": false, 00:19:58.429 "esnap_clone": false 00:19:58.429 } 00:19:58.429 } 00:19:58.430 } 00:19:58.430 ]' 00:19:58.430 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:58.689 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:58.689 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:58.689 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:58.689 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:58.689 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:58.689 12:13:46 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:58.689 12:13:46 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:58.689 12:13:46 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:58.958 12:13:46 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:58.958 12:13:46 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:58.958 12:13:46 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.958 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:58.958 
12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:58.958 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:58.958 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:19:58.958 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:59.216 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:59.216 { 00:19:59.216 "name": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:59.216 "aliases": [ 00:19:59.216 "lvs/nvme0n1p0" 00:19:59.216 ], 00:19:59.216 "product_name": "Logical Volume", 00:19:59.216 "block_size": 4096, 00:19:59.216 "num_blocks": 26476544, 00:19:59.216 "uuid": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:59.216 "assigned_rate_limits": { 00:19:59.216 "rw_ios_per_sec": 0, 00:19:59.216 "rw_mbytes_per_sec": 0, 00:19:59.216 "r_mbytes_per_sec": 0, 00:19:59.216 "w_mbytes_per_sec": 0 00:19:59.216 }, 00:19:59.216 "claimed": false, 00:19:59.216 "zoned": false, 00:19:59.216 "supported_io_types": { 00:19:59.216 "read": true, 00:19:59.216 "write": true, 00:19:59.216 "unmap": true, 00:19:59.216 "flush": false, 00:19:59.216 "reset": true, 00:19:59.216 "nvme_admin": false, 00:19:59.216 "nvme_io": false, 00:19:59.216 "nvme_io_md": false, 00:19:59.216 "write_zeroes": true, 00:19:59.216 "zcopy": false, 00:19:59.216 "get_zone_info": false, 00:19:59.216 "zone_management": false, 00:19:59.216 "zone_append": false, 00:19:59.216 "compare": false, 00:19:59.216 "compare_and_write": false, 00:19:59.216 "abort": false, 00:19:59.216 "seek_hole": true, 00:19:59.216 "seek_data": true, 00:19:59.216 "copy": false, 00:19:59.216 "nvme_iov_md": false 00:19:59.216 }, 00:19:59.216 "driver_specific": { 00:19:59.216 "lvol": { 00:19:59.216 "lvol_store_uuid": "da950123-7fee-4d9f-b8b3-40b2d4c9e80f", 00:19:59.216 "base_bdev": "nvme0n1", 00:19:59.216 "thin_provision": true, 00:19:59.216 "num_allocated_clusters": 0, 00:19:59.216 "snapshot": false, 00:19:59.216 "clone": false, 00:19:59.216 "esnap_clone": false 00:19:59.216 } 00:19:59.216 } 00:19:59.216 } 00:19:59.216 ]' 00:19:59.216 12:13:46 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:59.216 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:59.216 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:59.216 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:59.216 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:59.216 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:59.216 12:13:47 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:59.216 12:13:47 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:59.474 12:13:47 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:59.474 12:13:47 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:59.474 12:13:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:59.474 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:59.474 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:59.474 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:19:59.474 12:13:47 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:19:59.474 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 75450329-7622-46e7-b9a7-9f1f2ad8eac3 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:59.732 { 00:19:59.732 "name": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:59.732 "aliases": [ 00:19:59.732 "lvs/nvme0n1p0" 00:19:59.732 ], 00:19:59.732 "product_name": "Logical Volume", 00:19:59.732 "block_size": 4096, 00:19:59.732 "num_blocks": 26476544, 00:19:59.732 "uuid": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:19:59.732 "assigned_rate_limits": { 00:19:59.732 "rw_ios_per_sec": 0, 00:19:59.732 "rw_mbytes_per_sec": 0, 00:19:59.732 "r_mbytes_per_sec": 0, 00:19:59.732 "w_mbytes_per_sec": 0 00:19:59.732 }, 00:19:59.732 "claimed": false, 00:19:59.732 "zoned": false, 00:19:59.732 "supported_io_types": { 00:19:59.732 "read": true, 00:19:59.732 "write": true, 00:19:59.732 "unmap": true, 00:19:59.732 "flush": false, 00:19:59.732 "reset": true, 00:19:59.732 "nvme_admin": false, 00:19:59.732 "nvme_io": false, 00:19:59.732 "nvme_io_md": false, 00:19:59.732 "write_zeroes": true, 00:19:59.732 "zcopy": false, 00:19:59.732 "get_zone_info": false, 00:19:59.732 "zone_management": false, 00:19:59.732 "zone_append": false, 00:19:59.732 "compare": false, 00:19:59.732 "compare_and_write": false, 00:19:59.732 "abort": false, 00:19:59.732 "seek_hole": true, 00:19:59.732 "seek_data": true, 00:19:59.732 "copy": false, 00:19:59.732 "nvme_iov_md": false 00:19:59.732 }, 00:19:59.732 "driver_specific": { 00:19:59.732 "lvol": { 00:19:59.732 "lvol_store_uuid": "da950123-7fee-4d9f-b8b3-40b2d4c9e80f", 00:19:59.732 "base_bdev": "nvme0n1", 00:19:59.732 "thin_provision": true, 00:19:59.732 "num_allocated_clusters": 0, 00:19:59.732 "snapshot": false, 00:19:59.732 "clone": false, 00:19:59.732 "esnap_clone": false 00:19:59.732 } 00:19:59.732 } 00:19:59.732 } 00:19:59.732 ]' 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:59.732 12:13:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:19:59.732 12:13:47 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:59.732 12:13:47 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 75450329-7622-46e7-b9a7-9f1f2ad8eac3 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:59.991 [2024-07-26 12:13:47.744037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.744095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.991 [2024-07-26 12:13:47.744111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.991 [2024-07-26 12:13:47.744143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.747531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.747574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.991 [2024-07-26 12:13:47.747588] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.361 ms 00:19:59.991 [2024-07-26 12:13:47.747600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.747739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.991 [2024-07-26 12:13:47.748819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.991 [2024-07-26 12:13:47.748849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.748866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.991 [2024-07-26 12:13:47.748877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:19:59.991 [2024-07-26 12:13:47.748889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.748997] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc106026-f60a-43f0-978e-e006bfb6b3f6 00:19:59.991 [2024-07-26 12:13:47.750417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.750448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:59.991 [2024-07-26 12:13:47.750463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:59.991 [2024-07-26 12:13:47.750473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.758099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.758155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.991 [2024-07-26 12:13:47.758171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.554 ms 00:19:59.991 [2024-07-26 12:13:47.758182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.758390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.758408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.991 [2024-07-26 12:13:47.758422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:59.991 [2024-07-26 12:13:47.758433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.758486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.758501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.991 [2024-07-26 12:13:47.758515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:59.991 [2024-07-26 12:13:47.758526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.758574] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:59.991 [2024-07-26 12:13:47.764529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.764574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.991 [2024-07-26 12:13:47.764587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.976 ms 00:19:59.991 [2024-07-26 12:13:47.764601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 
12:13:47.764680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.991 [2024-07-26 12:13:47.764695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.991 [2024-07-26 12:13:47.764706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:59.991 [2024-07-26 12:13:47.764718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.991 [2024-07-26 12:13:47.764750] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:59.991 [2024-07-26 12:13:47.764884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:59.991 [2024-07-26 12:13:47.764898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.991 [2024-07-26 12:13:47.764968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:59.991 [2024-07-26 12:13:47.764982] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.991 [2024-07-26 12:13:47.764996] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.991 [2024-07-26 12:13:47.765010] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:59.991 [2024-07-26 12:13:47.765023] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.991 [2024-07-26 12:13:47.765033] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:59.991 [2024-07-26 12:13:47.765068] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:59.992 [2024-07-26 12:13:47.765079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.992 [2024-07-26 12:13:47.765091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.992 [2024-07-26 12:13:47.765102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:19:59.992 [2024-07-26 12:13:47.765114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.992 [2024-07-26 12:13:47.765213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.992 [2024-07-26 12:13:47.765228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.992 [2024-07-26 12:13:47.765238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:59.992 [2024-07-26 12:13:47.765252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.992 [2024-07-26 12:13:47.765359] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.992 [2024-07-26 12:13:47.765377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.992 [2024-07-26 12:13:47.765388] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.992 [2024-07-26 12:13:47.765423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:19:59.992 [2024-07-26 12:13:47.765452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.992 [2024-07-26 12:13:47.765473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.992 [2024-07-26 12:13:47.765484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:59.992 [2024-07-26 12:13:47.765493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.992 [2024-07-26 12:13:47.765506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.992 [2024-07-26 12:13:47.765516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:59.992 [2024-07-26 12:13:47.765527] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.992 [2024-07-26 12:13:47.765550] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.992 [2024-07-26 12:13:47.765580] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.992 [2024-07-26 12:13:47.765611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.992 [2024-07-26 12:13:47.765649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.992 [2024-07-26 12:13:47.765699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.992 [2024-07-26 12:13:47.765730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765744] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.992 [2024-07-26 12:13:47.765754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.992 [2024-07-26 12:13:47.765766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:59.992 [2024-07-26 12:13:47.765775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.992 [2024-07-26 12:13:47.765787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.992 [2024-07-26 12:13:47.765796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:59.992 [2024-07-26 12:13:47.765810] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.992 [2024-07-26 12:13:47.765832] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:59.992 [2024-07-26 12:13:47.765841] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765853] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.992 [2024-07-26 12:13:47.765863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.992 [2024-07-26 12:13:47.765875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.992 [2024-07-26 12:13:47.765904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:59.992 [2024-07-26 12:13:47.765915] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.992 [2024-07-26 12:13:47.765929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.992 [2024-07-26 12:13:47.765939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.992 [2024-07-26 12:13:47.765951] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.992 [2024-07-26 12:13:47.765960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.992 [2024-07-26 12:13:47.765977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.992 [2024-07-26 12:13:47.765989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:59.992 [2024-07-26 12:13:47.766015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:59.992 [2024-07-26 12:13:47.766028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:59.992 [2024-07-26 12:13:47.766039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:59.992 [2024-07-26 12:13:47.766052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:59.992 [2024-07-26 12:13:47.766062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:59.992 [2024-07-26 12:13:47.766076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:59.992 [2024-07-26 12:13:47.766086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:59.992 [2024-07-26 12:13:47.766101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:59.992 [2024-07-26 12:13:47.766112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:59.992 [2024-07-26 12:13:47.766186] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.992 [2024-07-26 12:13:47.766198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.992 [2024-07-26 12:13:47.766224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.992 [2024-07-26 12:13:47.766237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.992 [2024-07-26 12:13:47.766248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.992 [2024-07-26 12:13:47.766261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.992 [2024-07-26 12:13:47.766272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.992 [2024-07-26 12:13:47.766285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:19:59.992 [2024-07-26 12:13:47.766297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.992 [2024-07-26 12:13:47.766392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
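The layout report above is the direct result of the bdev_ftl_create call issued a few lines earlier (ftl0 built on the 103424 MiB thin lvol with nvc0n1p0 as the NV cache, core_mask 7, l2p_dram_limit 60, overprovisioning 10). A condensed sketch of that RPC sequence, with the addresses, sizes and UUIDs copied from this log, follows together with a quick check that the reported 90.00 MiB l2p region matches the L2P entry count; the snippet is illustrative and not a verbatim extract of trim.sh or common.sh.

    #!/usr/bin/env bash
    # Sketch: the RPC chain the test used to assemble ftl0 (values copied from the log above).
    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC_PY" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe (nvme0n1)
    "$RPC_PY" bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
    "$RPC_PY" bdev_lvol_create nvme0n1p0 103424 -t -u da950123-7fee-4d9f-b8b3-40b2d4c9e80f
    "$RPC_PY" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe (nvc0n1)
    "$RPC_PY" bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV cache split
    "$RPC_PY" -t 240 bdev_ftl_create -b ftl0 -d 75450329-7622-46e7-b9a7-9f1f2ad8eac3 \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

    # Sanity check against the layout dump: 23592960 L2P entries at 4 bytes each
    # is exactly the 90.00 MiB reported for the "l2p" region.
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # prints 90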
00:19:59.992 [2024-07-26 12:13:47.766406] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:04.178 [2024-07-26 12:13:51.256502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.178 [2024-07-26 12:13:51.256565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:04.178 [2024-07-26 12:13:51.256585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3495.770 ms 00:20:04.178 [2024-07-26 12:13:51.256596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.178 [2024-07-26 12:13:51.301569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.178 [2024-07-26 12:13:51.301634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.178 [2024-07-26 12:13:51.301653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.681 ms 00:20:04.178 [2024-07-26 12:13:51.301664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.178 [2024-07-26 12:13:51.301836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.178 [2024-07-26 12:13:51.301855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.178 [2024-07-26 12:13:51.301869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:04.178 [2024-07-26 12:13:51.301879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.363889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.363946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.179 [2024-07-26 12:13:51.363968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.065 ms 00:20:04.179 [2024-07-26 12:13:51.363981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.364152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.364169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.179 [2024-07-26 12:13:51.364189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.179 [2024-07-26 12:13:51.364205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.364684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.364707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.179 [2024-07-26 12:13:51.364724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:20:04.179 [2024-07-26 12:13:51.364737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.364879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.364894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.179 [2024-07-26 12:13:51.364911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:04.179 [2024-07-26 12:13:51.364923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.390130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.390191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.179 [2024-07-26 
12:13:51.390211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.189 ms 00:20:04.179 [2024-07-26 12:13:51.390225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.405896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:04.179 [2024-07-26 12:13:51.422768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.422832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:04.179 [2024-07-26 12:13:51.422848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.445 ms 00:20:04.179 [2024-07-26 12:13:51.422861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.520583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.520660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:04.179 [2024-07-26 12:13:51.520678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.761 ms 00:20:04.179 [2024-07-26 12:13:51.520691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.520960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.520978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:04.179 [2024-07-26 12:13:51.520990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:04.179 [2024-07-26 12:13:51.521010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.560746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.560812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:04.179 [2024-07-26 12:13:51.560828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.764 ms 00:20:04.179 [2024-07-26 12:13:51.560842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.598353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.598435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:04.179 [2024-07-26 12:13:51.598453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.462 ms 00:20:04.179 [2024-07-26 12:13:51.598466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.599359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.599388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:04.179 [2024-07-26 12:13:51.599400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:20:04.179 [2024-07-26 12:13:51.599413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.717953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.718021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:04.179 [2024-07-26 12:13:51.718037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.687 ms 00:20:04.179 [2024-07-26 12:13:51.718054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 
12:13:51.757604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.757687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:04.179 [2024-07-26 12:13:51.757707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.499 ms 00:20:04.179 [2024-07-26 12:13:51.757720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.796806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.796885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:04.179 [2024-07-26 12:13:51.796901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.017 ms 00:20:04.179 [2024-07-26 12:13:51.796914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.837911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.837980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:04.179 [2024-07-26 12:13:51.837997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.965 ms 00:20:04.179 [2024-07-26 12:13:51.838009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.838150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.838167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:04.179 [2024-07-26 12:13:51.838180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:04.179 [2024-07-26 12:13:51.838196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.838280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.179 [2024-07-26 12:13:51.838295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:04.179 [2024-07-26 12:13:51.838305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:04.179 [2024-07-26 12:13:51.838335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.179 [2024-07-26 12:13:51.839411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.179 [2024-07-26 12:13:51.844969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4101.696 ms, result 0 00:20:04.179 [2024-07-26 12:13:51.846017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:04.179 { 00:20:04.179 "name": "ftl0", 00:20:04.179 "uuid": "bc106026-f60a-43f0-978e-e006bfb6b3f6" 00:20:04.179 } 00:20:04.179 12:13:51 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:04.179 12:13:51 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:04.179 12:13:52 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:04.438 [ 00:20:04.438 { 00:20:04.438 "name": "ftl0", 00:20:04.438 "aliases": [ 00:20:04.438 "bc106026-f60a-43f0-978e-e006bfb6b3f6" 00:20:04.438 ], 00:20:04.438 "product_name": "FTL disk", 00:20:04.438 "block_size": 4096, 00:20:04.438 "num_blocks": 23592960, 00:20:04.439 "uuid": "bc106026-f60a-43f0-978e-e006bfb6b3f6", 00:20:04.439 "assigned_rate_limits": { 00:20:04.439 "rw_ios_per_sec": 0, 00:20:04.439 "rw_mbytes_per_sec": 0, 00:20:04.439 "r_mbytes_per_sec": 0, 00:20:04.439 "w_mbytes_per_sec": 0 00:20:04.439 }, 00:20:04.439 "claimed": false, 00:20:04.439 "zoned": false, 00:20:04.439 "supported_io_types": { 00:20:04.439 "read": true, 00:20:04.439 "write": true, 00:20:04.439 "unmap": true, 00:20:04.439 "flush": true, 00:20:04.439 "reset": false, 00:20:04.439 "nvme_admin": false, 00:20:04.439 "nvme_io": false, 00:20:04.439 "nvme_io_md": false, 00:20:04.439 "write_zeroes": true, 00:20:04.439 "zcopy": false, 00:20:04.439 "get_zone_info": false, 00:20:04.439 "zone_management": false, 00:20:04.439 "zone_append": false, 00:20:04.439 "compare": false, 00:20:04.439 "compare_and_write": false, 00:20:04.439 "abort": false, 00:20:04.439 "seek_hole": false, 00:20:04.439 "seek_data": false, 00:20:04.439 "copy": false, 00:20:04.439 "nvme_iov_md": false 00:20:04.439 }, 00:20:04.439 "driver_specific": { 00:20:04.439 "ftl": { 00:20:04.439 "base_bdev": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:20:04.439 "cache": "nvc0n1p0" 00:20:04.439 } 00:20:04.439 } 00:20:04.439 } 00:20:04.439 ] 00:20:04.439 12:13:52 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:20:04.439 12:13:52 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:04.439 12:13:52 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:04.698 12:13:52 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:04.698 12:13:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:04.698 12:13:52 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:04.698 { 00:20:04.698 "name": "ftl0", 00:20:04.698 "aliases": [ 00:20:04.698 "bc106026-f60a-43f0-978e-e006bfb6b3f6" 00:20:04.698 ], 00:20:04.698 "product_name": "FTL disk", 00:20:04.698 "block_size": 4096, 00:20:04.698 "num_blocks": 23592960, 00:20:04.698 "uuid": "bc106026-f60a-43f0-978e-e006bfb6b3f6", 00:20:04.698 "assigned_rate_limits": { 00:20:04.698 "rw_ios_per_sec": 0, 00:20:04.698 "rw_mbytes_per_sec": 0, 00:20:04.698 "r_mbytes_per_sec": 0, 00:20:04.698 "w_mbytes_per_sec": 0 00:20:04.698 }, 00:20:04.698 "claimed": false, 00:20:04.698 "zoned": false, 00:20:04.698 "supported_io_types": { 00:20:04.698 "read": true, 00:20:04.698 "write": true, 00:20:04.698 "unmap": true, 00:20:04.698 "flush": true, 00:20:04.698 "reset": false, 00:20:04.698 "nvme_admin": false, 00:20:04.698 "nvme_io": false, 00:20:04.698 "nvme_io_md": false, 00:20:04.698 "write_zeroes": true, 00:20:04.698 "zcopy": false, 00:20:04.698 "get_zone_info": false, 00:20:04.698 "zone_management": false, 00:20:04.698 "zone_append": false, 00:20:04.698 "compare": false, 00:20:04.698 "compare_and_write": false, 00:20:04.698 "abort": false, 00:20:04.698 "seek_hole": false, 00:20:04.698 "seek_data": false, 00:20:04.698 "copy": false, 00:20:04.698 "nvme_iov_md": false 00:20:04.698 }, 00:20:04.698 "driver_specific": { 00:20:04.698 "ftl": { 00:20:04.698 "base_bdev": "75450329-7622-46e7-b9a7-9f1f2ad8eac3", 00:20:04.698 "cache": "nvc0n1p0" 
00:20:04.698 } 00:20:04.698 } 00:20:04.698 } 00:20:04.698 ]' 00:20:04.698 12:13:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:04.959 12:13:52 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:04.959 12:13:52 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:04.959 [2024-07-26 12:13:52.853847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.853904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:04.959 [2024-07-26 12:13:52.853921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.959 [2024-07-26 12:13:52.853932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.853975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:04.959 [2024-07-26 12:13:52.857729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.857765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:04.959 [2024-07-26 12:13:52.857778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:20:04.959 [2024-07-26 12:13:52.857794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.858345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.858370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:04.959 [2024-07-26 12:13:52.858387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:20:04.959 [2024-07-26 12:13:52.858399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.861282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.861306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:04.959 [2024-07-26 12:13:52.861317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.853 ms 00:20:04.959 [2024-07-26 12:13:52.861330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.867028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.867067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:04.959 [2024-07-26 12:13:52.867080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.652 ms 00:20:04.959 [2024-07-26 12:13:52.867095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.906451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.906513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:04.959 [2024-07-26 12:13:52.906529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.303 ms 00:20:04.959 [2024-07-26 12:13:52.906545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.930514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.930577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:04.959 [2024-07-26 12:13:52.930592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.906 ms 00:20:04.959 
[2024-07-26 12:13:52.930605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.959 [2024-07-26 12:13:52.930837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.959 [2024-07-26 12:13:52.930854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:04.959 [2024-07-26 12:13:52.930866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:20:04.959 [2024-07-26 12:13:52.930878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.221 [2024-07-26 12:13:52.970201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.221 [2024-07-26 12:13:52.970257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:05.221 [2024-07-26 12:13:52.970272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.352 ms 00:20:05.221 [2024-07-26 12:13:52.970285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.221 [2024-07-26 12:13:53.008411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.221 [2024-07-26 12:13:53.008473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:05.221 [2024-07-26 12:13:53.008489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.074 ms 00:20:05.221 [2024-07-26 12:13:53.008505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.221 [2024-07-26 12:13:53.046696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.221 [2024-07-26 12:13:53.046779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.221 [2024-07-26 12:13:53.046797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.153 ms 00:20:05.221 [2024-07-26 12:13:53.046810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.221 [2024-07-26 12:13:53.084677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.221 [2024-07-26 12:13:53.084736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.221 [2024-07-26 12:13:53.084751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.764 ms 00:20:05.221 [2024-07-26 12:13:53.084763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.221 [2024-07-26 12:13:53.084853] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.221 [2024-07-26 12:13:53.084875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084964] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.084988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.221 [2024-07-26 12:13:53.085082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085307] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 
12:13:53.085614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:20:05.222 [2024-07-26 12:13:53.085922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.085994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.222 [2024-07-26 12:13:53.086154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.222 [2024-07-26 12:13:53.086164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:05.222 [2024-07-26 12:13:53.086183] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:05.222 [2024-07-26 12:13:53.086193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:05.222 [2024-07-26 12:13:53.086205] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:05.222 [2024-07-26 12:13:53.086215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:05.222 [2024-07-26 12:13:53.086227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.222 [2024-07-26 12:13:53.086239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:05.223 [2024-07-26 12:13:53.086252] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.223 [2024-07-26 12:13:53.086261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.223 [2024-07-26 12:13:53.086272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.223 [2024-07-26 12:13:53.086282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.223 [2024-07-26 12:13:53.086294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.223 [2024-07-26 12:13:53.086306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:20:05.223 [2024-07-26 12:13:53.086318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.106310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.223 [2024-07-26 12:13:53.106361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.223 [2024-07-26 12:13:53.106375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.987 ms 00:20:05.223 [2024-07-26 12:13:53.106391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.106977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.223 [2024-07-26 12:13:53.106992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.223 [2024-07-26 12:13:53.107003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:20:05.223 [2024-07-26 12:13:53.107031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.176697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.223 [2024-07-26 12:13:53.176767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.223 [2024-07-26 12:13:53.176782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.223 [2024-07-26 12:13:53.176796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.176953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.223 [2024-07-26 12:13:53.176969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.223 [2024-07-26 12:13:53.176980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.223 [2024-07-26 12:13:53.176993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.177071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.223 [2024-07-26 12:13:53.177087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.223 [2024-07-26 12:13:53.177098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.223 [2024-07-26 12:13:53.177114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.223 [2024-07-26 12:13:53.177167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.223 [2024-07-26 12:13:53.177187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.223 [2024-07-26 12:13:53.177197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.223 [2024-07-26 12:13:53.177210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.307934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:20:05.481 [2024-07-26 12:13:53.308000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.481 [2024-07-26 12:13:53.308015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.308029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.409907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.409970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.481 [2024-07-26 12:13:53.409985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.409998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.481 [2024-07-26 12:13:53.410168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.481 [2024-07-26 12:13:53.410282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.481 [2024-07-26 12:13:53.410465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.481 [2024-07-26 12:13:53.410560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.481 [2024-07-26 12:13:53.410650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 12:13:53.410719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.481 [2024-07-26 12:13:53.410733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.481 [2024-07-26 12:13:53.410742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.481 [2024-07-26 12:13:53.410754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.481 [2024-07-26 
12:13:53.410940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.994 ms, result 0 00:20:05.481 true 00:20:05.481 12:13:53 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79389 00:20:05.481 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79389 ']' 00:20:05.481 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79389 00:20:05.481 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:20:05.481 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.481 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79389 00:20:05.740 killing process with pid 79389 00:20:05.740 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.740 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.740 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79389' 00:20:05.740 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79389 00:20:05.740 12:13:53 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79389 00:20:09.026 12:13:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:09.592 65536+0 records in 00:20:09.592 65536+0 records out 00:20:09.592 268435456 bytes (268 MB, 256 MiB) copied, 0.980686 s, 274 MB/s 00:20:09.592 12:13:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.592 [2024-07-26 12:13:57.451692] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
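A quick sanity check of the random-pattern step above (a sketch assuming standard coreutils dd semantics, not part of trim.sh): 65536 blocks of 4 KiB is exactly the 268435456 bytes / 256 MiB that dd reports, and it covers only a small slice of the 23592960-block ftl0 bdev (4096-byte blocks, i.e. 90 GiB) created earlier.

  echo $(( 65536 * 4096 )) bytes                          # 268435456, as reported by dd
  echo $(( 65536 * 4096 / 1024 / 1024 )) MiB              # 256
  echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 )) GiB    # 90, ftl0 num_blocks * block_size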
00:20:09.592 [2024-07-26 12:13:57.451815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79601 ] 00:20:09.851 [2024-07-26 12:13:57.622651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.153 [2024-07-26 12:13:57.851876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.411 [2024-07-26 12:13:58.254638] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.411 [2024-07-26 12:13:58.254716] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:10.669 [2024-07-26 12:13:58.416693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.669 [2024-07-26 12:13:58.416752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.669 [2024-07-26 12:13:58.416769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.669 [2024-07-26 12:13:58.416779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.669 [2024-07-26 12:13:58.420048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.669 [2024-07-26 12:13:58.420091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.669 [2024-07-26 12:13:58.420104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:20:10.669 [2024-07-26 12:13:58.420114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.669 [2024-07-26 12:13:58.420234] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.669 [2024-07-26 12:13:58.421444] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.669 [2024-07-26 12:13:58.421479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.669 [2024-07-26 12:13:58.421490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.669 [2024-07-26 12:13:58.421501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:20:10.669 [2024-07-26 12:13:58.421511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.669 [2024-07-26 12:13:58.422973] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:10.669 [2024-07-26 12:13:58.443905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.669 [2024-07-26 12:13:58.443945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:10.669 [2024-07-26 12:13:58.443965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.966 ms 00:20:10.669 [2024-07-26 12:13:58.443975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.669 [2024-07-26 12:13:58.444074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.669 [2024-07-26 12:13:58.444088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:10.670 [2024-07-26 12:13:58.444099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:10.670 [2024-07-26 12:13:58.444109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.450778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:10.670 [2024-07-26 12:13:58.450808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.670 [2024-07-26 12:13:58.450819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.615 ms 00:20:10.670 [2024-07-26 12:13:58.450829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.450923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.450938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.670 [2024-07-26 12:13:58.450949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:10.670 [2024-07-26 12:13:58.450960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.450991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.451002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.670 [2024-07-26 12:13:58.451015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:10.670 [2024-07-26 12:13:58.451025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.451047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:10.670 [2024-07-26 12:13:58.456750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.456783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.670 [2024-07-26 12:13:58.456795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.718 ms 00:20:10.670 [2024-07-26 12:13:58.456805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.456873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.456887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.670 [2024-07-26 12:13:58.456897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:10.670 [2024-07-26 12:13:58.456907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.456927] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:10.670 [2024-07-26 12:13:58.456949] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:10.670 [2024-07-26 12:13:58.456986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:10.670 [2024-07-26 12:13:58.457003] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:10.670 [2024-07-26 12:13:58.457085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.670 [2024-07-26 12:13:58.457098] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.670 [2024-07-26 12:13:58.457111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:10.670 [2024-07-26 12:13:58.457142] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457154] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457169] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:10.670 [2024-07-26 12:13:58.457179] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.670 [2024-07-26 12:13:58.457189] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.670 [2024-07-26 12:13:58.457199] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.670 [2024-07-26 12:13:58.457209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.457219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.670 [2024-07-26 12:13:58.457230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:10.670 [2024-07-26 12:13:58.457240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.457328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.670 [2024-07-26 12:13:58.457339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.670 [2024-07-26 12:13:58.457352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:10.670 [2024-07-26 12:13:58.457362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.670 [2024-07-26 12:13:58.457446] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.670 [2024-07-26 12:13:58.457458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.670 [2024-07-26 12:13:58.457468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.670 [2024-07-26 12:13:58.457497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.670 [2024-07-26 12:13:58.457525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.670 [2024-07-26 12:13:58.457543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.670 [2024-07-26 12:13:58.457552] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:10.670 [2024-07-26 12:13:58.457563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.670 [2024-07-26 12:13:58.457572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.670 [2024-07-26 12:13:58.457581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:10.670 [2024-07-26 12:13:58.457589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.670 [2024-07-26 12:13:58.457607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457634] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.670 [2024-07-26 12:13:58.457653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457662] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.670 [2024-07-26 12:13:58.457679] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.670 [2024-07-26 12:13:58.457706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:10.670 [2024-07-26 12:13:58.457734] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457752] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.670 [2024-07-26 12:13:58.457761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.670 [2024-07-26 12:13:58.457778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.670 [2024-07-26 12:13:58.457787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:10.670 [2024-07-26 12:13:58.457795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.670 [2024-07-26 12:13:58.457804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.670 [2024-07-26 12:13:58.457813] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:10.670 [2024-07-26 12:13:58.457821] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.670 [2024-07-26 12:13:58.457840] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:10.670 [2024-07-26 12:13:58.457848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457856] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.670 [2024-07-26 12:13:58.457867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.670 [2024-07-26 12:13:58.457876] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.670 [2024-07-26 12:13:58.457898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.670 [2024-07-26 12:13:58.457908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.670 [2024-07-26 12:13:58.457917] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.670 
[2024-07-26 12:13:58.457926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.670 [2024-07-26 12:13:58.457935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.670 [2024-07-26 12:13:58.457944] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.670 [2024-07-26 12:13:58.457954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.670 [2024-07-26 12:13:58.457966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.670 [2024-07-26 12:13:58.457977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:10.670 [2024-07-26 12:13:58.457987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:10.670 [2024-07-26 12:13:58.457997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:10.671 [2024-07-26 12:13:58.458006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:10.671 [2024-07-26 12:13:58.458016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:10.671 [2024-07-26 12:13:58.458027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:10.671 [2024-07-26 12:13:58.458037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:10.671 [2024-07-26 12:13:58.458046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:10.671 [2024-07-26 12:13:58.458056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:10.671 [2024-07-26 12:13:58.458066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:10.671 [2024-07-26 12:13:58.458116] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.671 [2024-07-26 12:13:58.458137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.671 [2024-07-26 12:13:58.458159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.671 [2024-07-26 12:13:58.458169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.671 [2024-07-26 12:13:58.458179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.671 [2024-07-26 12:13:58.458190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.458200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.671 [2024-07-26 12:13:58.458210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:20:10.671 [2024-07-26 12:13:58.458219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.512610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.512657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.671 [2024-07-26 12:13:58.512676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.425 ms 00:20:10.671 [2024-07-26 12:13:58.512686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.512851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.512867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:10.671 [2024-07-26 12:13:58.512879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:10.671 [2024-07-26 12:13:58.512889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.564973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.565018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.671 [2024-07-26 12:13:58.565032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.143 ms 00:20:10.671 [2024-07-26 12:13:58.565046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.565153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.565167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.671 [2024-07-26 12:13:58.565178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:10.671 [2024-07-26 12:13:58.565188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.565620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.565643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.671 [2024-07-26 12:13:58.565654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:20:10.671 [2024-07-26 12:13:58.565664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.565787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.565806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.671 [2024-07-26 12:13:58.565816] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:10.671 [2024-07-26 12:13:58.565827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.587352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.587390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.671 [2024-07-26 12:13:58.587404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.537 ms 00:20:10.671 [2024-07-26 12:13:58.587414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.608147] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:10.671 [2024-07-26 12:13:58.608201] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:10.671 [2024-07-26 12:13:58.608217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.608228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:10.671 [2024-07-26 12:13:58.608240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.701 ms 00:20:10.671 [2024-07-26 12:13:58.608249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.671 [2024-07-26 12:13:58.639117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.671 [2024-07-26 12:13:58.639163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:10.671 [2024-07-26 12:13:58.639177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.833 ms 00:20:10.671 [2024-07-26 12:13:58.639187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.658387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.658434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:10.930 [2024-07-26 12:13:58.658449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.142 ms 00:20:10.930 [2024-07-26 12:13:58.658459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.678000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.678039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:10.930 [2024-07-26 12:13:58.678053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.468 ms 00:20:10.930 [2024-07-26 12:13:58.678063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.678914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.678949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.930 [2024-07-26 12:13:58.678961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:20:10.930 [2024-07-26 12:13:58.678971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.765243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.765316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.930 [2024-07-26 12:13:58.765334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.381 ms 00:20:10.930 [2024-07-26 12:13:58.765344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.778008] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.930 [2024-07-26 12:13:58.794685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.794731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.930 [2024-07-26 12:13:58.794745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.220 ms 00:20:10.930 [2024-07-26 12:13:58.794756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.794866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.794879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.930 [2024-07-26 12:13:58.794894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:10.930 [2024-07-26 12:13:58.794905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.794958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.794969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.930 [2024-07-26 12:13:58.794980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:10.930 [2024-07-26 12:13:58.794990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.795012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.795023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.930 [2024-07-26 12:13:58.795033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:10.930 [2024-07-26 12:13:58.795047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.795082] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.930 [2024-07-26 12:13:58.795094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.795103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.930 [2024-07-26 12:13:58.795114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:10.930 [2024-07-26 12:13:58.795149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.831709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.831752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.930 [2024-07-26 12:13:58.831773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.595 ms 00:20:10.930 [2024-07-26 12:13:58.831784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.930 [2024-07-26 12:13:58.831901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.930 [2024-07-26 12:13:58.831915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.930 [2024-07-26 12:13:58.831927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:10.930 [2024-07-26 12:13:58.831936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:10.930 [2024-07-26 12:13:58.832830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.930 [2024-07-26 12:13:58.837946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.525 ms, result 0 00:20:10.930 [2024-07-26 12:13:58.838756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.930 [2024-07-26 12:13:58.857162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:20.562  Copying: 27/256 [MB] (27 MBps) Copying: 53/256 [MB] (26 MBps) Copying: 80/256 [MB] (26 MBps) Copying: 105/256 [MB] (25 MBps) Copying: 131/256 [MB] (26 MBps) Copying: 157/256 [MB] (25 MBps) Copying: 184/256 [MB] (26 MBps) Copying: 211/256 [MB] (27 MBps) Copying: 238/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 26 MBps)[2024-07-26 12:14:08.501983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:20.562 [2024-07-26 12:14:08.516353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.562 [2024-07-26 12:14:08.516419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.562 [2024-07-26 12:14:08.516436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:20.562 [2024-07-26 12:14:08.516446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.562 [2024-07-26 12:14:08.516475] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:20.562 [2024-07-26 12:14:08.520131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.562 [2024-07-26 12:14:08.520175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.562 [2024-07-26 12:14:08.520188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:20:20.562 [2024-07-26 12:14:08.520199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.562 [2024-07-26 12:14:08.522105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.562 [2024-07-26 12:14:08.522169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.562 [2024-07-26 12:14:08.522183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.880 ms 00:20:20.562 [2024-07-26 12:14:08.522193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.562 [2024-07-26 12:14:08.529516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.562 [2024-07-26 12:14:08.529556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.562 [2024-07-26 12:14:08.529569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.312 ms 00:20:20.562 [2024-07-26 12:14:08.529585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.562 [2024-07-26 12:14:08.535306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.562 [2024-07-26 12:14:08.535341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.562 [2024-07-26 12:14:08.535354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.660 ms 00:20:20.562 [2024-07-26 12:14:08.535364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.574551] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.574618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.823 [2024-07-26 12:14:08.574634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.195 ms 00:20:20.823 [2024-07-26 12:14:08.574645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.597748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.597817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.823 [2024-07-26 12:14:08.597834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.039 ms 00:20:20.823 [2024-07-26 12:14:08.597845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.598044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.598058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.823 [2024-07-26 12:14:08.598069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:20.823 [2024-07-26 12:14:08.598079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.638376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.638430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:20.823 [2024-07-26 12:14:08.638446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.341 ms 00:20:20.823 [2024-07-26 12:14:08.638456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.678550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.678620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:20.823 [2024-07-26 12:14:08.678637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.064 ms 00:20:20.823 [2024-07-26 12:14:08.678648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.717882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.717948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.823 [2024-07-26 12:14:08.717965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.187 ms 00:20:20.823 [2024-07-26 12:14:08.717975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.759312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.823 [2024-07-26 12:14:08.759375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.823 [2024-07-26 12:14:08.759393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.271 ms 00:20:20.823 [2024-07-26 12:14:08.759403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.823 [2024-07-26 12:14:08.759493] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.823 [2024-07-26 12:14:08.759514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 
12:14:08.759548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:20:20.823 [2024-07-26 12:14:08.759812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.823 [2024-07-26 12:14:08.759897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.759992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.824 [2024-07-26 12:14:08.760625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.824 [2024-07-26 12:14:08.760635] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:20.824 [2024-07-26 12:14:08.760647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.824 [2024-07-26 12:14:08.760656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.824 [2024-07-26 12:14:08.760666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.824 [2024-07-26 12:14:08.760688] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.824 [2024-07-26 12:14:08.760698] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.824 [2024-07-26 12:14:08.760708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.824 [2024-07-26 12:14:08.760719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.824 [2024-07-26 12:14:08.760728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.824 [2024-07-26 12:14:08.760737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.824 [2024-07-26 12:14:08.760747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.824 [2024-07-26 12:14:08.760757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.824 [2024-07-26 12:14:08.760767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:20:20.824 [2024-07-26 12:14:08.760780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.824 [2024-07-26 12:14:08.781903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.824 [2024-07-26 12:14:08.781949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.824 [2024-07-26 12:14:08.781962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.133 ms 00:20:20.824 [2024-07-26 12:14:08.781988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.825 [2024-07-26 12:14:08.782646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.825 [2024-07-26 12:14:08.782663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.825 [2024-07-26 12:14:08.782681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:20:20.825 [2024-07-26 12:14:08.782691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:08.830998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:08.831049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.084 [2024-07-26 12:14:08.831062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:08.831089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:08.831203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:08.831216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.084 [2024-07-26 12:14:08.831230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:08.831240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:08.831295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:08.831308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.084 
[2024-07-26 12:14:08.831319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:08.831328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:08.831347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:08.831358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.084 [2024-07-26 12:14:08.831367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:08.831381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:08.951471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:08.951535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.084 [2024-07-26 12:14:08.951551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:08.951562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.084 [2024-07-26 12:14:09.054416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.084 [2024-07-26 12:14:09.054538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.084 [2024-07-26 12:14:09.054598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.084 [2024-07-26 12:14:09.054733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.084 [2024-07-26 12:14:09.054802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054867] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.084 [2024-07-26 12:14:09.054877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.054931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.084 [2024-07-26 12:14:09.054943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.084 [2024-07-26 12:14:09.054953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.084 [2024-07-26 12:14:09.054962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.084 [2024-07-26 12:14:09.055099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.630 ms, result 0 00:20:22.991 00:20:22.991 00:20:22.991 12:14:10 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79731 00:20:22.991 12:14:10 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79731 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79731 ']' 00:20:22.991 12:14:10 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.991 12:14:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:22.991 [2024-07-26 12:14:10.601211] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:20:22.991 [2024-07-26 12:14:10.601343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79731 ] 00:20:22.991 [2024-07-26 12:14:10.771314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.250 [2024-07-26 12:14:11.005945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.187 12:14:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.187 12:14:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:20:24.187 12:14:11 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:24.187 [2024-07-26 12:14:12.098630] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.187 [2024-07-26 12:14:12.098697] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:24.447 [2024-07-26 12:14:12.276531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.276592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.447 [2024-07-26 12:14:12.276608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:24.447 [2024-07-26 12:14:12.276621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.279702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.279746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.447 [2024-07-26 12:14:12.279759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:24.447 [2024-07-26 12:14:12.279772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.279868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.447 [2024-07-26 12:14:12.280938] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.447 [2024-07-26 12:14:12.280969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.280984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.447 [2024-07-26 12:14:12.280995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:20:24.447 [2024-07-26 12:14:12.281010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.282497] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:24.447 [2024-07-26 12:14:12.301435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.301475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:24.447 [2024-07-26 12:14:12.301492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.964 ms 00:20:24.447 [2024-07-26 12:14:12.301502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.301603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.301616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:24.447 [2024-07-26 12:14:12.301637] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:24.447 [2024-07-26 12:14:12.301648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.308320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.308352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.447 [2024-07-26 12:14:12.308371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:20:24.447 [2024-07-26 12:14:12.308382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.308513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.308527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.447 [2024-07-26 12:14:12.308540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:24.447 [2024-07-26 12:14:12.308554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.308588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.308599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.447 [2024-07-26 12:14:12.308612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:24.447 [2024-07-26 12:14:12.308622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.308652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:24.447 [2024-07-26 12:14:12.313934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.313970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.447 [2024-07-26 12:14:12.313982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:20:24.447 [2024-07-26 12:14:12.313995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.314066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.314084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.447 [2024-07-26 12:14:12.314097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:24.447 [2024-07-26 12:14:12.314110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.314152] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:24.447 [2024-07-26 12:14:12.314177] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:24.447 [2024-07-26 12:14:12.314219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:24.447 [2024-07-26 12:14:12.314245] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:24.447 [2024-07-26 12:14:12.314327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.447 [2024-07-26 12:14:12.314347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.447 [2024-07-26 12:14:12.314360] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:24.447 [2024-07-26 12:14:12.314376] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:24.447 [2024-07-26 12:14:12.314388] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.447 [2024-07-26 12:14:12.314401] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:24.447 [2024-07-26 12:14:12.314411] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.447 [2024-07-26 12:14:12.314423] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.447 [2024-07-26 12:14:12.314432] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.447 [2024-07-26 12:14:12.314447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.314457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.447 [2024-07-26 12:14:12.314470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:20:24.447 [2024-07-26 12:14:12.314482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.314555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-07-26 12:14:12.314566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.447 [2024-07-26 12:14:12.314578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:24.447 [2024-07-26 12:14:12.314588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-07-26 12:14:12.314682] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.447 [2024-07-26 12:14:12.314695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.447 [2024-07-26 12:14:12.314707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.447 [2024-07-26 12:14:12.314718] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-07-26 12:14:12.314734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.447 [2024-07-26 12:14:12.314744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.447 [2024-07-26 12:14:12.314755] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:24.447 [2024-07-26 12:14:12.314765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.447 [2024-07-26 12:14:12.314779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:24.447 [2024-07-26 12:14:12.314788] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.447 [2024-07-26 12:14:12.314800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:24.447 [2024-07-26 12:14:12.314810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:24.448 [2024-07-26 12:14:12.314822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.448 [2024-07-26 12:14:12.314832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.448 [2024-07-26 12:14:12.314843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:24.448 [2024-07-26 12:14:12.314852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 
[2024-07-26 12:14:12.314864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.448 [2024-07-26 12:14:12.314873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:24.448 [2024-07-26 12:14:12.314884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 [2024-07-26 12:14:12.314893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.448 [2024-07-26 12:14:12.314904] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:24.448 [2024-07-26 12:14:12.314913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.448 [2024-07-26 12:14:12.314924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.448 [2024-07-26 12:14:12.314934] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:24.448 [2024-07-26 12:14:12.314948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.448 [2024-07-26 12:14:12.314957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.448 [2024-07-26 12:14:12.314968] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:24.448 [2024-07-26 12:14:12.314986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.448 [2024-07-26 12:14:12.315000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.448 [2024-07-26 12:14:12.315009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:24.448 [2024-07-26 12:14:12.315021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.448 [2024-07-26 12:14:12.315030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.448 [2024-07-26 12:14:12.315042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:24.448 [2024-07-26 12:14:12.315051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.448 [2024-07-26 12:14:12.315062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.448 [2024-07-26 12:14:12.315071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:24.448 [2024-07-26 12:14:12.315083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.448 [2024-07-26 12:14:12.315092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.448 [2024-07-26 12:14:12.315103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:24.448 [2024-07-26 12:14:12.315112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 [2024-07-26 12:14:12.315137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:24.448 [2024-07-26 12:14:12.315147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:24.448 [2024-07-26 12:14:12.315158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 [2024-07-26 12:14:12.315169] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.448 [2024-07-26 12:14:12.315182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.448 [2024-07-26 12:14:12.315192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.448 [2024-07-26 12:14:12.315204] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 [2024-07-26 12:14:12.315214] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:24.448 [2024-07-26 12:14:12.315226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.448 [2024-07-26 12:14:12.315235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.448 [2024-07-26 12:14:12.315247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.448 [2024-07-26 12:14:12.315256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.448 [2024-07-26 12:14:12.315268] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.448 [2024-07-26 12:14:12.315278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.448 [2024-07-26 12:14:12.315292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:24.448 [2024-07-26 12:14:12.315321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:24.448 [2024-07-26 12:14:12.315331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:24.448 [2024-07-26 12:14:12.315343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:24.448 [2024-07-26 12:14:12.315354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:24.448 [2024-07-26 12:14:12.315366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:24.448 [2024-07-26 12:14:12.315376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:24.448 [2024-07-26 12:14:12.315389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:24.448 [2024-07-26 12:14:12.315399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:24.448 [2024-07-26 12:14:12.315411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:24.448 [2024-07-26 12:14:12.315467] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.448 [2024-07-26 
12:14:12.315480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.448 [2024-07-26 12:14:12.315507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.448 [2024-07-26 12:14:12.315517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.448 [2024-07-26 12:14:12.315529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.448 [2024-07-26 12:14:12.315541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.315554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.448 [2024-07-26 12:14:12.315564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:20:24.448 [2024-07-26 12:14:12.315579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.357065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.357138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.448 [2024-07-26 12:14:12.357158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.492 ms 00:20:24.448 [2024-07-26 12:14:12.357171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.357332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.357348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:24.448 [2024-07-26 12:14:12.357360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:24.448 [2024-07-26 12:14:12.357373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.404879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.404940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.448 [2024-07-26 12:14:12.404955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.559 ms 00:20:24.448 [2024-07-26 12:14:12.404968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.405096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.405112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.448 [2024-07-26 12:14:12.405138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:24.448 [2024-07-26 12:14:12.405151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.405579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.405600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.448 [2024-07-26 12:14:12.405611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:20:24.448 [2024-07-26 12:14:12.405634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:24.448 [2024-07-26 12:14:12.405752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-07-26 12:14:12.405774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.448 [2024-07-26 12:14:12.405785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:24.448 [2024-07-26 12:14:12.405797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.708 [2024-07-26 12:14:12.427298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.708 [2024-07-26 12:14:12.427353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.708 [2024-07-26 12:14:12.427367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.511 ms 00:20:24.708 [2024-07-26 12:14:12.427380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.708 [2024-07-26 12:14:12.446753] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:24.709 [2024-07-26 12:14:12.446802] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:24.709 [2024-07-26 12:14:12.446821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.446835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:24.709 [2024-07-26 12:14:12.446847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.331 ms 00:20:24.709 [2024-07-26 12:14:12.446860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.476438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.476501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:24.709 [2024-07-26 12:14:12.476516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.539 ms 00:20:24.709 [2024-07-26 12:14:12.476602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.495308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.495360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:24.709 [2024-07-26 12:14:12.495385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:20:24.709 [2024-07-26 12:14:12.495402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.513904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.513951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:24.709 [2024-07-26 12:14:12.513966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.448 ms 00:20:24.709 [2024-07-26 12:14:12.513978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.514757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.514791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:24.709 [2024-07-26 12:14:12.514803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:20:24.709 [2024-07-26 12:14:12.514815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 
12:14:12.607605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.607676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:24.709 [2024-07-26 12:14:12.607693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.913 ms 00:20:24.709 [2024-07-26 12:14:12.607706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.620009] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:24.709 [2024-07-26 12:14:12.636270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.636328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:24.709 [2024-07-26 12:14:12.636348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.482 ms 00:20:24.709 [2024-07-26 12:14:12.636359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.636490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.636503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:24.709 [2024-07-26 12:14:12.636517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:24.709 [2024-07-26 12:14:12.636528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.636582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.636593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:24.709 [2024-07-26 12:14:12.636609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:24.709 [2024-07-26 12:14:12.636620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.636647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.636657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:24.709 [2024-07-26 12:14:12.636670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.709 [2024-07-26 12:14:12.636679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.636717] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:24.709 [2024-07-26 12:14:12.636729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.636744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:24.709 [2024-07-26 12:14:12.636754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:24.709 [2024-07-26 12:14:12.636769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.673365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.673418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:24.709 [2024-07-26 12:14:12.673434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.631 ms 00:20:24.709 [2024-07-26 12:14:12.673447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.673561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.709 [2024-07-26 12:14:12.673581] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:24.709 [2024-07-26 12:14:12.673595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:24.709 [2024-07-26 12:14:12.673608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.709 [2024-07-26 12:14:12.674862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.709 [2024-07-26 12:14:12.680176] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.685 ms, result 0 00:20:24.709 [2024-07-26 12:14:12.681285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.968 Some configs were skipped because the RPC state that can call them passed over. 00:20:24.968 12:14:12 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:24.968 [2024-07-26 12:14:12.908452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.968 [2024-07-26 12:14:12.908510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:24.968 [2024-07-26 12:14:12.908532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:20:24.969 [2024-07-26 12:14:12.908542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.969 [2024-07-26 12:14:12.908585] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.726 ms, result 0 00:20:24.969 true 00:20:24.969 12:14:12 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:25.227 [2024-07-26 12:14:13.095545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.227 [2024-07-26 12:14:13.095609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:25.227 [2024-07-26 12:14:13.095626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.890 ms 00:20:25.227 [2024-07-26 12:14:13.095638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.227 [2024-07-26 12:14:13.095677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.025 ms, result 0 00:20:25.227 true 00:20:25.227 12:14:13 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79731 00:20:25.227 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79731 ']' 00:20:25.227 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79731 00:20:25.227 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79731 00:20:25.228 killing process with pid 79731 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79731' 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79731 00:20:25.228 12:14:13 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79731 00:20:26.622 [2024-07-26 12:14:14.260568] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.622 [2024-07-26 12:14:14.260635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.622 [2024-07-26 12:14:14.260653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:26.622 [2024-07-26 12:14:14.260666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.622 [2024-07-26 12:14:14.260693] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:26.622 [2024-07-26 12:14:14.264489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.622 [2024-07-26 12:14:14.264525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.622 [2024-07-26 12:14:14.264538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.786 ms 00:20:26.622 [2024-07-26 12:14:14.264554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.622 [2024-07-26 12:14:14.264816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.622 [2024-07-26 12:14:14.264837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.622 [2024-07-26 12:14:14.264849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:20:26.622 [2024-07-26 12:14:14.264861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.622 [2024-07-26 12:14:14.268267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.622 [2024-07-26 12:14:14.268310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.623 [2024-07-26 12:14:14.268323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.393 ms 00:20:26.623 [2024-07-26 12:14:14.268336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.274027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.274067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.623 [2024-07-26 12:14:14.274080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.664 ms 00:20:26.623 [2024-07-26 12:14:14.274094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.289988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.290033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.623 [2024-07-26 12:14:14.290047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.855 ms 00:20:26.623 [2024-07-26 12:14:14.290063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.300556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.300606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.623 [2024-07-26 12:14:14.300620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.451 ms 00:20:26.623 [2024-07-26 12:14:14.300632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.300783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.300799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.623 [2024-07-26 12:14:14.300811] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:26.623 [2024-07-26 12:14:14.300836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.316743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.316786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:26.623 [2024-07-26 12:14:14.316800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.914 ms 00:20:26.623 [2024-07-26 12:14:14.316812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.331972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.332016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:26.623 [2024-07-26 12:14:14.332029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.136 ms 00:20:26.623 [2024-07-26 12:14:14.332047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.346607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.346646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.623 [2024-07-26 12:14:14.346660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.543 ms 00:20:26.623 [2024-07-26 12:14:14.346671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.361419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.623 [2024-07-26 12:14:14.361459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.623 [2024-07-26 12:14:14.361472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.706 ms 00:20:26.623 [2024-07-26 12:14:14.361484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.623 [2024-07-26 12:14:14.361521] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.623 [2024-07-26 12:14:14.361542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 
12:14:14.361680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:26.623 [2024-07-26 12:14:14.361986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.361999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.623 [2024-07-26 12:14:14.362295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.624 [2024-07-26 12:14:14.362792] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.624 [2024-07-26 12:14:14.362802] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:26.624 [2024-07-26 12:14:14.362818] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.624 [2024-07-26 12:14:14.362829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.624 [2024-07-26 12:14:14.362841] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.624 [2024-07-26 12:14:14.362853] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.624 [2024-07-26 12:14:14.362865] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.624 [2024-07-26 12:14:14.362875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.624 [2024-07-26 12:14:14.362887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.624 [2024-07-26 12:14:14.362897] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.624 [2024-07-26 12:14:14.362921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.624 [2024-07-26 12:14:14.362930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
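[Editor's note on the ftl_debug statistics dump above: every one of the 100 bands reports 0 of 261120 blocks valid and the device shows 0 user writes, so the WAF line (write amplification factor, presumably total media writes divided by user writes) cannot be evaluated and is logged as "inf"; the 960 total writes are presumably the metadata persisted during this clean shutdown. A minimal sanity-check sketch using only the numbers from the dump follows; the variable names are illustrative and not SPDK API.]
    # Hypothetical check (not part of the SPDK test): recompute the WAF reported above.
    total_writes = 960   # "total writes: 960" from the dump
    user_writes = 0      # "user writes: 0" from the dump (the test only trimmed, never wrote user data)
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print(f"WAF: {waf}")  # -> WAF: inf, matching the "WAF: inf" line in the dump
[End of editor's note; the log continues with the "Dump statistics" trace step.]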
00:20:26.624 [2024-07-26 12:14:14.362943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.624 [2024-07-26 12:14:14.362954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.413 ms 00:20:26.624 [2024-07-26 12:14:14.362969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.382910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.624 [2024-07-26 12:14:14.382954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.624 [2024-07-26 12:14:14.382967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.941 ms 00:20:26.624 [2024-07-26 12:14:14.382983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.383554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.624 [2024-07-26 12:14:14.383578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.624 [2024-07-26 12:14:14.383593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:20:26.624 [2024-07-26 12:14:14.383605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.448285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.624 [2024-07-26 12:14:14.448342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.624 [2024-07-26 12:14:14.448356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.624 [2024-07-26 12:14:14.448369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.448483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.624 [2024-07-26 12:14:14.448498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.624 [2024-07-26 12:14:14.448511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.624 [2024-07-26 12:14:14.448523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.448575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.624 [2024-07-26 12:14:14.448591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.624 [2024-07-26 12:14:14.448602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.624 [2024-07-26 12:14:14.448617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.448636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.624 [2024-07-26 12:14:14.448649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.624 [2024-07-26 12:14:14.448660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.624 [2024-07-26 12:14:14.448675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.624 [2024-07-26 12:14:14.565618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.624 [2024-07-26 12:14:14.565700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.624 [2024-07-26 12:14:14.565717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.624 [2024-07-26 12:14:14.565731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.884 [2024-07-26 
12:14:14.670545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.884 [2024-07-26 12:14:14.670631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.884 [2024-07-26 12:14:14.670650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.884 [2024-07-26 12:14:14.670663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.884 [2024-07-26 12:14:14.670778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.884 [2024-07-26 12:14:14.670793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.884 [2024-07-26 12:14:14.670804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.884 [2024-07-26 12:14:14.670821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.670851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.885 [2024-07-26 12:14:14.670864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.885 [2024-07-26 12:14:14.670875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.885 [2024-07-26 12:14:14.670887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.671007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.885 [2024-07-26 12:14:14.671023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.885 [2024-07-26 12:14:14.671034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.885 [2024-07-26 12:14:14.671046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.671082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.885 [2024-07-26 12:14:14.671097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.885 [2024-07-26 12:14:14.671108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.885 [2024-07-26 12:14:14.671148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.671193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.885 [2024-07-26 12:14:14.671207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.885 [2024-07-26 12:14:14.671217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.885 [2024-07-26 12:14:14.671232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.671297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.885 [2024-07-26 12:14:14.671312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.885 [2024-07-26 12:14:14.671322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.885 [2024-07-26 12:14:14.671333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.885 [2024-07-26 12:14:14.671468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 411.550 ms, result 0 00:20:27.822 12:14:15 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:27.822 12:14:15 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.081 [2024-07-26 12:14:15.823501] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:20:28.081 [2024-07-26 12:14:15.823632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79800 ] 00:20:28.081 [2024-07-26 12:14:15.991371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.340 [2024-07-26 12:14:16.219830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.910 [2024-07-26 12:14:16.606098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.910 [2024-07-26 12:14:16.606190] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.910 [2024-07-26 12:14:16.767275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.910 [2024-07-26 12:14:16.767327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:28.910 [2024-07-26 12:14:16.767342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:28.910 [2024-07-26 12:14:16.767353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.910 [2024-07-26 12:14:16.770482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.910 [2024-07-26 12:14:16.770522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.910 [2024-07-26 12:14:16.770535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:20:28.911 [2024-07-26 12:14:16.770545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.770640] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:28.911 [2024-07-26 12:14:16.771761] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:28.911 [2024-07-26 12:14:16.771788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.771799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.911 [2024-07-26 12:14:16.771809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:20:28.911 [2024-07-26 12:14:16.771819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.773373] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:28.911 [2024-07-26 12:14:16.793404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.793442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:28.911 [2024-07-26 12:14:16.793461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.064 ms 00:20:28.911 [2024-07-26 12:14:16.793487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.793586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.793600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:28.911 [2024-07-26 12:14:16.793611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:20:28.911 [2024-07-26 12:14:16.793628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.800253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.800282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.911 [2024-07-26 12:14:16.800293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.596 ms 00:20:28.911 [2024-07-26 12:14:16.800303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.800396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.800410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.911 [2024-07-26 12:14:16.800421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:28.911 [2024-07-26 12:14:16.800431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.800462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.800473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:28.911 [2024-07-26 12:14:16.800486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:28.911 [2024-07-26 12:14:16.800496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.800519] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:28.911 [2024-07-26 12:14:16.806055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.806088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.911 [2024-07-26 12:14:16.806100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.551 ms 00:20:28.911 [2024-07-26 12:14:16.806110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.806191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.806204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:28.911 [2024-07-26 12:14:16.806215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:28.911 [2024-07-26 12:14:16.806225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.806245] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:28.911 [2024-07-26 12:14:16.806268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:28.911 [2024-07-26 12:14:16.806304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:28.911 [2024-07-26 12:14:16.806322] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:28.911 [2024-07-26 12:14:16.806406] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:28.911 [2024-07-26 12:14:16.806420] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:28.911 [2024-07-26 12:14:16.806433] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:28.911 [2024-07-26 12:14:16.806446] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806457] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806472] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:28.911 [2024-07-26 12:14:16.806482] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:28.911 [2024-07-26 12:14:16.806491] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:28.911 [2024-07-26 12:14:16.806501] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:28.911 [2024-07-26 12:14:16.806511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.806522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:28.911 [2024-07-26 12:14:16.806532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:28.911 [2024-07-26 12:14:16.806542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.806613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.911 [2024-07-26 12:14:16.806624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:28.911 [2024-07-26 12:14:16.806639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:28.911 [2024-07-26 12:14:16.806648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.911 [2024-07-26 12:14:16.806731] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:28.911 [2024-07-26 12:14:16.806748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:28.911 [2024-07-26 12:14:16.806759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:28.911 [2024-07-26 12:14:16.806788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:28.911 [2024-07-26 12:14:16.806817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.911 [2024-07-26 12:14:16.806835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:28.911 [2024-07-26 12:14:16.806845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:28.911 [2024-07-26 12:14:16.806854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:28.911 [2024-07-26 12:14:16.806863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:28.911 [2024-07-26 12:14:16.806872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:28.911 [2024-07-26 12:14:16.806881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806890] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:28.911 [2024-07-26 12:14:16.806899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806919] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:28.911 [2024-07-26 12:14:16.806937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:28.911 [2024-07-26 12:14:16.806965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:28.911 [2024-07-26 12:14:16.806974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.911 [2024-07-26 12:14:16.806983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:28.911 [2024-07-26 12:14:16.806992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:28.911 [2024-07-26 12:14:16.807001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.911 [2024-07-26 12:14:16.807010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:28.911 [2024-07-26 12:14:16.807019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:28.911 [2024-07-26 12:14:16.807027] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:28.911 [2024-07-26 12:14:16.807036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:28.911 [2024-07-26 12:14:16.807046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:28.911 [2024-07-26 12:14:16.807054] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.911 [2024-07-26 12:14:16.807063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:28.911 [2024-07-26 12:14:16.807073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:28.911 [2024-07-26 12:14:16.807082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:28.911 [2024-07-26 12:14:16.807092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:28.911 [2024-07-26 12:14:16.807101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:28.911 [2024-07-26 12:14:16.807110] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.807128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:28.911 [2024-07-26 12:14:16.807138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:28.911 [2024-07-26 12:14:16.807147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.911 [2024-07-26 12:14:16.807157] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:28.911 [2024-07-26 12:14:16.807166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:28.911 [2024-07-26 12:14:16.807176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:28.912 [2024-07-26 12:14:16.807186] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:28.912 [2024-07-26 12:14:16.807200] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:28.912 
[2024-07-26 12:14:16.807209] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:28.912 [2024-07-26 12:14:16.807218] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:28.912 [2024-07-26 12:14:16.807228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:28.912 [2024-07-26 12:14:16.807237] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:28.912 [2024-07-26 12:14:16.807246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:28.912 [2024-07-26 12:14:16.807256] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:28.912 [2024-07-26 12:14:16.807268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:28.912 [2024-07-26 12:14:16.807290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:28.912 [2024-07-26 12:14:16.807300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:28.912 [2024-07-26 12:14:16.807311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:28.912 [2024-07-26 12:14:16.807322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:28.912 [2024-07-26 12:14:16.807333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:28.912 [2024-07-26 12:14:16.807343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:28.912 [2024-07-26 12:14:16.807353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:28.912 [2024-07-26 12:14:16.807363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:28.912 [2024-07-26 12:14:16.807373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:28.912 [2024-07-26 12:14:16.807424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:28.912 [2024-07-26 12:14:16.807436] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:28.912 [2024-07-26 12:14:16.807457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:28.912 [2024-07-26 12:14:16.807467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:28.912 [2024-07-26 12:14:16.807477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:28.912 [2024-07-26 12:14:16.807488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.912 [2024-07-26 12:14:16.807498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:28.912 [2024-07-26 12:14:16.807508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:20:28.912 [2024-07-26 12:14:16.807518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.912 [2024-07-26 12:14:16.857540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.912 [2024-07-26 12:14:16.857691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.912 [2024-07-26 12:14:16.857791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.050 ms 00:20:28.912 [2024-07-26 12:14:16.857827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.912 [2024-07-26 12:14:16.857991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.912 [2024-07-26 12:14:16.858095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:28.912 [2024-07-26 12:14:16.858150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:28.912 [2024-07-26 12:14:16.858183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.910878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.911101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.172 [2024-07-26 12:14:16.911230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.685 ms 00:20:29.172 [2024-07-26 12:14:16.911278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.911411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.911483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.172 [2024-07-26 12:14:16.911558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.172 [2024-07-26 12:14:16.911588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.912045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.912187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.172 [2024-07-26 12:14:16.912267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:29.172 [2024-07-26 12:14:16.912301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 
12:14:16.912458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.912499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.172 [2024-07-26 12:14:16.912566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:29.172 [2024-07-26 12:14:16.912600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.933382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.933517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.172 [2024-07-26 12:14:16.933538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.767 ms 00:20:29.172 [2024-07-26 12:14:16.933548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.954019] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:29.172 [2024-07-26 12:14:16.954058] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.172 [2024-07-26 12:14:16.954073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.954085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.172 [2024-07-26 12:14:16.954096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.422 ms 00:20:29.172 [2024-07-26 12:14:16.954106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:16.983895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:16.983933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.172 [2024-07-26 12:14:16.983947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.744 ms 00:20:29.172 [2024-07-26 12:14:16.983958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.003303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.003346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.172 [2024-07-26 12:14:17.003360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.291 ms 00:20:29.172 [2024-07-26 12:14:17.003370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.022558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.022594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.172 [2024-07-26 12:14:17.022607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:20:29.172 [2024-07-26 12:14:17.022617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.023466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.023495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.172 [2024-07-26 12:14:17.023507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:20:29.172 [2024-07-26 12:14:17.023517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.112339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.112404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.172 [2024-07-26 12:14:17.112421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.937 ms 00:20:29.172 [2024-07-26 12:14:17.112431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.124844] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.172 [2024-07-26 12:14:17.141279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.141342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.172 [2024-07-26 12:14:17.141357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.737 ms 00:20:29.172 [2024-07-26 12:14:17.141384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.141506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.141520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.172 [2024-07-26 12:14:17.141531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:29.172 [2024-07-26 12:14:17.141541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.141596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.141607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.172 [2024-07-26 12:14:17.141617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:29.172 [2024-07-26 12:14:17.141636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.141659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.141674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.172 [2024-07-26 12:14:17.141684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.172 [2024-07-26 12:14:17.141694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-07-26 12:14:17.141729] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.172 [2024-07-26 12:14:17.141741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.172 [2024-07-26 12:14:17.141751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.172 [2024-07-26 12:14:17.141761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:29.172 [2024-07-26 12:14:17.141771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.431 [2024-07-26 12:14:17.181611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.431 [2024-07-26 12:14:17.181684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.431 [2024-07-26 12:14:17.181700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.880 ms 00:20:29.431 [2024-07-26 12:14:17.181711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.431 [2024-07-26 12:14:17.181862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.431 [2024-07-26 12:14:17.181875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:29.431 [2024-07-26 12:14:17.181887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:29.431 [2024-07-26 12:14:17.181896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.431 [2024-07-26 12:14:17.182931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.431 [2024-07-26 12:14:17.188813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 416.035 ms, result 0 00:20:29.431 [2024-07-26 12:14:17.189770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.431 [2024-07-26 12:14:17.209297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:38.379  Copying: 31/256 [MB] (31 MBps) Copying: 59/256 [MB] (28 MBps) Copying: 87/256 [MB] (28 MBps) Copying: 115/256 [MB] (27 MBps) Copying: 143/256 [MB] (27 MBps) Copying: 172/256 [MB] (28 MBps) Copying: 200/256 [MB] (28 MBps) Copying: 227/256 [MB] (26 MBps) Copying: 255/256 [MB] (28 MBps) Copying: 256/256 [MB] (average 28 MBps)[2024-07-26 12:14:26.209661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:38.379 [2024-07-26 12:14:26.224714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-07-26 12:14:26.224893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:38.379 [2024-07-26 12:14:26.224977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:38.379 [2024-07-26 12:14:26.225014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-07-26 12:14:26.225075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:38.379 [2024-07-26 12:14:26.228743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-07-26 12:14:26.228897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:38.379 [2024-07-26 12:14:26.228972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:20:38.379 [2024-07-26 12:14:26.229007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-07-26 12:14:26.229264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.379 [2024-07-26 12:14:26.229304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:38.379 [2024-07-26 12:14:26.229335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:38.379 [2024-07-26 12:14:26.229415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.379 [2024-07-26 12:14:26.232301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.232410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:38.380 [2024-07-26 12:14:26.232494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:20:38.380 [2024-07-26 12:14:26.232528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.380 [2024-07-26 12:14:26.238295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.238437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:38.380 [2024-07-26 12:14:26.238512] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.715 ms 00:20:38.380 [2024-07-26 12:14:26.238546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.380 [2024-07-26 12:14:26.277729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.277957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:38.380 [2024-07-26 12:14:26.278032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.156 ms 00:20:38.380 [2024-07-26 12:14:26.278068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.380 [2024-07-26 12:14:26.299768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.299917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:38.380 [2024-07-26 12:14:26.299991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.643 ms 00:20:38.380 [2024-07-26 12:14:26.300032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.380 [2024-07-26 12:14:26.300204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.300295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:38.380 [2024-07-26 12:14:26.300350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:38.380 [2024-07-26 12:14:26.300380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.380 [2024-07-26 12:14:26.340223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.380 [2024-07-26 12:14:26.340455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:38.380 [2024-07-26 12:14:26.340478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.867 ms 00:20:38.380 [2024-07-26 12:14:26.340489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.639 [2024-07-26 12:14:26.381000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.639 [2024-07-26 12:14:26.381081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:38.639 [2024-07-26 12:14:26.381097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.470 ms 00:20:38.639 [2024-07-26 12:14:26.381108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.639 [2024-07-26 12:14:26.418257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.639 [2024-07-26 12:14:26.418325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:38.639 [2024-07-26 12:14:26.418341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.110 ms 00:20:38.639 [2024-07-26 12:14:26.418352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.639 [2024-07-26 12:14:26.455882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.639 [2024-07-26 12:14:26.455977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:38.639 [2024-07-26 12:14:26.456013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.411 ms 00:20:38.639 [2024-07-26 12:14:26.456029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.639 [2024-07-26 12:14:26.456191] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:38.639 [2024-07-26 12:14:26.456235] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456611] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:38.639 [2024-07-26 12:14:26.456834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.456972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 
12:14:26.456987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:20:38.640 [2024-07-26 12:14:26.457371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:20:38.640 [2024-07-26 12:14:26.457761] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:38.640 [2024-07-26 12:14:26.457778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:38.640 [2024-07-26 12:14:26.457792] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:38.640 [2024-07-26 12:14:26.457805] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:38.640 [2024-07-26 12:14:26.457835] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:38.640 [2024-07-26 12:14:26.457850] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:38.640 [2024-07-26 12:14:26.457863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:38.640 [2024-07-26 12:14:26.457876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:38.640 [2024-07-26 12:14:26.457890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:38.640 [2024-07-26 12:14:26.457902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:38.640 [2024-07-26 12:14:26.457915] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:38.640 [2024-07-26 12:14:26.457931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.640 [2024-07-26 12:14:26.457946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:38.640 [2024-07-26 12:14:26.457969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.745 ms 00:20:38.640 [2024-07-26 12:14:26.457983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.640 [2024-07-26 12:14:26.476576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.640 [2024-07-26 12:14:26.476641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:38.640 [2024-07-26 12:14:26.476661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.576 ms 00:20:38.640 [2024-07-26 12:14:26.476691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.640 [2024-07-26 12:14:26.477249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:38.640 [2024-07-26 12:14:26.477289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:38.640 [2024-07-26 12:14:26.477306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:20:38.640 [2024-07-26 12:14:26.477320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.640 [2024-07-26 12:14:26.521023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.640 [2024-07-26 12:14:26.521110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:38.640 [2024-07-26 12:14:26.521161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.640 [2024-07-26 12:14:26.521177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.640 [2024-07-26 12:14:26.521363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.640 [2024-07-26 12:14:26.521395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:38.640 [2024-07-26 12:14:26.521411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.640 [2024-07-26 12:14:26.521427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:38.640 [2024-07-26 12:14:26.521504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.640 [2024-07-26 12:14:26.521522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:38.640 [2024-07-26 12:14:26.521538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.640 [2024-07-26 12:14:26.521553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.640 [2024-07-26 12:14:26.521581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.641 [2024-07-26 12:14:26.521595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:38.641 [2024-07-26 12:14:26.521615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.641 [2024-07-26 12:14:26.521641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.640156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.640217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:38.900 [2024-07-26 12:14:26.640234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.640244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.748894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.748964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:38.900 [2024-07-26 12:14:26.748980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.748990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:38.900 [2024-07-26 12:14:26.749105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.749115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:38.900 [2024-07-26 12:14:26.749190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.749204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:38.900 [2024-07-26 12:14:26.749334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.749343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:38.900 [2024-07-26 12:14:26.749400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 
12:14:26.749410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:38.900 [2024-07-26 12:14:26.749475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.749484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:38.900 [2024-07-26 12:14:26.749539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:38.900 [2024-07-26 12:14:26.749548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:38.900 [2024-07-26 12:14:26.749561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:38.900 [2024-07-26 12:14:26.749710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.839 ms, result 0 00:20:40.276 00:20:40.276 00:20:40.276 12:14:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:40.276 12:14:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:40.535 12:14:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:40.794 [2024-07-26 12:14:28.520418] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:20:40.794 [2024-07-26 12:14:28.520539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79937 ] 00:20:40.794 [2024-07-26 12:14:28.688075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.053 [2024-07-26 12:14:28.918086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.621 [2024-07-26 12:14:29.310496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.621 [2024-07-26 12:14:29.310569] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:41.621 [2024-07-26 12:14:29.472724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.472783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:41.621 [2024-07-26 12:14:29.472799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:41.621 [2024-07-26 12:14:29.472809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.475945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.475987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.621 [2024-07-26 12:14:29.476000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.119 ms 00:20:41.621 [2024-07-26 12:14:29.476010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.476106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:41.621 [2024-07-26 12:14:29.477264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:41.621 [2024-07-26 12:14:29.477298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.477309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.621 [2024-07-26 12:14:29.477320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:20:41.621 [2024-07-26 12:14:29.477330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.478822] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:41.621 [2024-07-26 12:14:29.499643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.499699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:41.621 [2024-07-26 12:14:29.499719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.856 ms 00:20:41.621 [2024-07-26 12:14:29.499730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.499830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.499843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:41.621 [2024-07-26 12:14:29.499854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:41.621 [2024-07-26 12:14:29.499864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.506666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:41.621 [2024-07-26 12:14:29.506695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.621 [2024-07-26 12:14:29.506707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.772 ms 00:20:41.621 [2024-07-26 12:14:29.506717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.506815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.506830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.621 [2024-07-26 12:14:29.506841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:41.621 [2024-07-26 12:14:29.506852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.621 [2024-07-26 12:14:29.506884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.621 [2024-07-26 12:14:29.506895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:41.622 [2024-07-26 12:14:29.506909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:41.622 [2024-07-26 12:14:29.506919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.622 [2024-07-26 12:14:29.506943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:41.622 [2024-07-26 12:14:29.512867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.622 [2024-07-26 12:14:29.512901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.622 [2024-07-26 12:14:29.512913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.939 ms 00:20:41.622 [2024-07-26 12:14:29.512923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.622 [2024-07-26 12:14:29.513004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.622 [2024-07-26 12:14:29.513016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:41.622 [2024-07-26 12:14:29.513028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:41.622 [2024-07-26 12:14:29.513037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.622 [2024-07-26 12:14:29.513062] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:41.622 [2024-07-26 12:14:29.513085] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:41.622 [2024-07-26 12:14:29.513138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:41.622 [2024-07-26 12:14:29.513157] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:41.622 [2024-07-26 12:14:29.513256] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:41.622 [2024-07-26 12:14:29.513270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:41.622 [2024-07-26 12:14:29.513283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:41.622 [2024-07-26 12:14:29.513297] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513308] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513323] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:41.622 [2024-07-26 12:14:29.513332] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:41.622 [2024-07-26 12:14:29.513342] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:41.622 [2024-07-26 12:14:29.513351] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:41.622 [2024-07-26 12:14:29.513362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.622 [2024-07-26 12:14:29.513371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:41.622 [2024-07-26 12:14:29.513382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:41.622 [2024-07-26 12:14:29.513391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.622 [2024-07-26 12:14:29.513464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.622 [2024-07-26 12:14:29.513475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:41.622 [2024-07-26 12:14:29.513488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:41.622 [2024-07-26 12:14:29.513498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.622 [2024-07-26 12:14:29.513584] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:41.622 [2024-07-26 12:14:29.513596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:41.622 [2024-07-26 12:14:29.513607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:41.622 [2024-07-26 12:14:29.513647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:41.622 [2024-07-26 12:14:29.513676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.622 [2024-07-26 12:14:29.513695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:41.622 [2024-07-26 12:14:29.513704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:41.622 [2024-07-26 12:14:29.513713] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:41.622 [2024-07-26 12:14:29.513722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:41.622 [2024-07-26 12:14:29.513732] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:41.622 [2024-07-26 12:14:29.513741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:41.622 [2024-07-26 12:14:29.513759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513780] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:41.622 [2024-07-26 12:14:29.513798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:41.622 [2024-07-26 12:14:29.513826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513835] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:41.622 [2024-07-26 12:14:29.513853] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513862] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:41.622 [2024-07-26 12:14:29.513879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:41.622 [2024-07-26 12:14:29.513897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:41.622 [2024-07-26 12:14:29.513906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.622 [2024-07-26 12:14:29.513924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:41.622 [2024-07-26 12:14:29.513933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:41.622 [2024-07-26 12:14:29.513942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:41.622 [2024-07-26 12:14:29.513951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:41.622 [2024-07-26 12:14:29.513960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:41.622 [2024-07-26 12:14:29.513969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.513978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:41.622 [2024-07-26 12:14:29.513987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:41.622 [2024-07-26 12:14:29.513998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.514007] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:41.622 [2024-07-26 12:14:29.514017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:41.622 [2024-07-26 12:14:29.514027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:41.622 [2024-07-26 12:14:29.514037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:41.622 [2024-07-26 12:14:29.514050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:41.622 [2024-07-26 12:14:29.514060] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:41.622 [2024-07-26 12:14:29.514069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:41.622 
[2024-07-26 12:14:29.514078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:41.622 [2024-07-26 12:14:29.514087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:41.622 [2024-07-26 12:14:29.514096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:41.622 [2024-07-26 12:14:29.514107] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:41.622 [2024-07-26 12:14:29.514129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.622 [2024-07-26 12:14:29.514141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:41.622 [2024-07-26 12:14:29.514152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:41.622 [2024-07-26 12:14:29.514163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:41.622 [2024-07-26 12:14:29.514173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:41.622 [2024-07-26 12:14:29.514183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:41.622 [2024-07-26 12:14:29.514193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:41.622 [2024-07-26 12:14:29.514203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:41.622 [2024-07-26 12:14:29.514213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:41.622 [2024-07-26 12:14:29.514223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:41.622 [2024-07-26 12:14:29.514233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:41.622 [2024-07-26 12:14:29.514243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:41.622 [2024-07-26 12:14:29.514253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:41.623 [2024-07-26 12:14:29.514262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:41.623 [2024-07-26 12:14:29.514273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:41.623 [2024-07-26 12:14:29.514282] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:41.623 [2024-07-26 12:14:29.514293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:41.623 [2024-07-26 12:14:29.514304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:41.623 [2024-07-26 12:14:29.514314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:41.623 [2024-07-26 12:14:29.514324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:41.623 [2024-07-26 12:14:29.514335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:41.623 [2024-07-26 12:14:29.514346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.623 [2024-07-26 12:14:29.514356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:41.623 [2024-07-26 12:14:29.514366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:20:41.623 [2024-07-26 12:14:29.514376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.623 [2024-07-26 12:14:29.570116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.623 [2024-07-26 12:14:29.570168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.623 [2024-07-26 12:14:29.570186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.772 ms 00:20:41.623 [2024-07-26 12:14:29.570197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.623 [2024-07-26 12:14:29.570368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.623 [2024-07-26 12:14:29.570385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:41.623 [2024-07-26 12:14:29.570396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:41.623 [2024-07-26 12:14:29.570405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.616904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.616952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.882 [2024-07-26 12:14:29.616966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.548 ms 00:20:41.882 [2024-07-26 12:14:29.616980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.617085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.617097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.882 [2024-07-26 12:14:29.617109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:41.882 [2024-07-26 12:14:29.617131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.617587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.617602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.882 [2024-07-26 12:14:29.617613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:20:41.882 [2024-07-26 12:14:29.617631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.617754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.617767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.882 [2024-07-26 12:14:29.617778] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:41.882 [2024-07-26 12:14:29.617788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.639514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.639564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.882 [2024-07-26 12:14:29.639579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.737 ms 00:20:41.882 [2024-07-26 12:14:29.639589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.660621] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:41.882 [2024-07-26 12:14:29.660662] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:41.882 [2024-07-26 12:14:29.660678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.660688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:41.882 [2024-07-26 12:14:29.660700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.972 ms 00:20:41.882 [2024-07-26 12:14:29.660710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.690685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.690730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:41.882 [2024-07-26 12:14:29.690744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.934 ms 00:20:41.882 [2024-07-26 12:14:29.690755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.710815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.710872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:41.882 [2024-07-26 12:14:29.710886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.991 ms 00:20:41.882 [2024-07-26 12:14:29.710896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.730614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.730673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:41.882 [2024-07-26 12:14:29.730687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.669 ms 00:20:41.882 [2024-07-26 12:14:29.730697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.731611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.731643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:41.882 [2024-07-26 12:14:29.731656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:20:41.882 [2024-07-26 12:14:29.731666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.820526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.820596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:41.882 [2024-07-26 12:14:29.820613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.969 ms 00:20:41.882 [2024-07-26 12:14:29.820624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.833721] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:41.882 [2024-07-26 12:14:29.850469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.850527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:41.882 [2024-07-26 12:14:29.850543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.726 ms 00:20:41.882 [2024-07-26 12:14:29.850554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.850675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.850689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:41.882 [2024-07-26 12:14:29.850700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:41.882 [2024-07-26 12:14:29.850710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.850764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.850776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:41.882 [2024-07-26 12:14:29.850786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:41.882 [2024-07-26 12:14:29.850796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.850819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.850833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:41.882 [2024-07-26 12:14:29.850843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:41.882 [2024-07-26 12:14:29.850853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.882 [2024-07-26 12:14:29.850887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:41.882 [2024-07-26 12:14:29.850899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.882 [2024-07-26 12:14:29.850909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:41.882 [2024-07-26 12:14:29.850918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:41.883 [2024-07-26 12:14:29.850928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.141 [2024-07-26 12:14:29.891797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.141 [2024-07-26 12:14:29.891856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:42.141 [2024-07-26 12:14:29.891872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.912 ms 00:20:42.141 [2024-07-26 12:14:29.891882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.141 [2024-07-26 12:14:29.892007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.141 [2024-07-26 12:14:29.892021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:42.141 [2024-07-26 12:14:29.892032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:42.141 [2024-07-26 12:14:29.892042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.141 [2024-07-26 12:14:29.892988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.141 [2024-07-26 12:14:29.898301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.646 ms, result 0 00:20:42.141 [2024-07-26 12:14:29.899188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.141 [2024-07-26 12:14:29.918304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.141  Copying: 4096/4096 [kB] (average 27 MBps)[2024-07-26 12:14:30.068627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.141 [2024-07-26 12:14:30.083602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.141 [2024-07-26 12:14:30.083648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.141 [2024-07-26 12:14:30.083663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:42.141 [2024-07-26 12:14:30.083673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.141 [2024-07-26 12:14:30.083705] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:42.141 [2024-07-26 12:14:30.087490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.141 [2024-07-26 12:14:30.087524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.141 [2024-07-26 12:14:30.087541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:20:42.141 [2024-07-26 12:14:30.087556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.141 [2024-07-26 12:14:30.089418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.141 [2024-07-26 12:14:30.089456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.141 [2024-07-26 12:14:30.089468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:20:42.141 [2024-07-26 12:14:30.089478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.142 [2024-07-26 12:14:30.092993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.142 [2024-07-26 12:14:30.093037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.142 [2024-07-26 12:14:30.093057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:20:42.142 [2024-07-26 12:14:30.093067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.142 [2024-07-26 12:14:30.098805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.142 [2024-07-26 12:14:30.098842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.142 [2024-07-26 12:14:30.098854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.713 ms 00:20:42.142 [2024-07-26 12:14:30.098864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.401 [2024-07-26 12:14:30.136904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.401 [2024-07-26 12:14:30.136947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.401 [2024-07-26 12:14:30.136962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
38.035 ms 00:20:42.401 [2024-07-26 12:14:30.136972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.401 [2024-07-26 12:14:30.158571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.401 [2024-07-26 12:14:30.158611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.401 [2024-07-26 12:14:30.158625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.549 ms 00:20:42.401 [2024-07-26 12:14:30.158642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.401 [2024-07-26 12:14:30.158778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.402 [2024-07-26 12:14:30.158792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.402 [2024-07-26 12:14:30.158803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:20:42.402 [2024-07-26 12:14:30.158813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.402 [2024-07-26 12:14:30.197690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.402 [2024-07-26 12:14:30.197750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:42.402 [2024-07-26 12:14:30.197764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.921 ms 00:20:42.402 [2024-07-26 12:14:30.197774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.402 [2024-07-26 12:14:30.236630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.402 [2024-07-26 12:14:30.236669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:42.402 [2024-07-26 12:14:30.236682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.858 ms 00:20:42.402 [2024-07-26 12:14:30.236692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.402 [2024-07-26 12:14:30.275155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.402 [2024-07-26 12:14:30.275194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.402 [2024-07-26 12:14:30.275208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.465 ms 00:20:42.402 [2024-07-26 12:14:30.275217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.402 [2024-07-26 12:14:30.313305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.402 [2024-07-26 12:14:30.313366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:42.402 [2024-07-26 12:14:30.313383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.063 ms 00:20:42.402 [2024-07-26 12:14:30.313392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.402 [2024-07-26 12:14:30.313468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.402 [2024-07-26 12:14:30.313487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 
12:14:30.313534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:42.402 [2024-07-26 12:14:30.313816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.313997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.402 [2024-07-26 12:14:30.314304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.403 [2024-07-26 12:14:30.314622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.403 [2024-07-26 12:14:30.314633] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:42.403 [2024-07-26 12:14:30.314643] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.403 [2024-07-26 12:14:30.314653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.403 
[2024-07-26 12:14:30.314678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.403 [2024-07-26 12:14:30.314688] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.403 [2024-07-26 12:14:30.314698] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.403 [2024-07-26 12:14:30.314709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.403 [2024-07-26 12:14:30.314719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.403 [2024-07-26 12:14:30.314728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.403 [2024-07-26 12:14:30.314737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.403 [2024-07-26 12:14:30.314748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.403 [2024-07-26 12:14:30.314758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.403 [2024-07-26 12:14:30.314773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:20:42.403 [2024-07-26 12:14:30.314783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.403 [2024-07-26 12:14:30.336418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.403 [2024-07-26 12:14:30.336460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.403 [2024-07-26 12:14:30.336475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.646 ms 00:20:42.403 [2024-07-26 12:14:30.336485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.403 [2024-07-26 12:14:30.337036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.403 [2024-07-26 12:14:30.337048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.403 [2024-07-26 12:14:30.337059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:20:42.403 [2024-07-26 12:14:30.337069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.662 [2024-07-26 12:14:30.383474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.662 [2024-07-26 12:14:30.383536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.662 [2024-07-26 12:14:30.383557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.662 [2024-07-26 12:14:30.383574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.662 [2024-07-26 12:14:30.383710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.662 [2024-07-26 12:14:30.383724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.662 [2024-07-26 12:14:30.383735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.662 [2024-07-26 12:14:30.383745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.662 [2024-07-26 12:14:30.383797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.662 [2024-07-26 12:14:30.383810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.662 [2024-07-26 12:14:30.383820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.662 [2024-07-26 12:14:30.383830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.662 [2024-07-26 12:14:30.383849] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:42.662 [2024-07-26 12:14:30.383864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.662 [2024-07-26 12:14:30.383874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.662 [2024-07-26 12:14:30.383884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.662 [2024-07-26 12:14:30.503317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.503384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.663 [2024-07-26 12:14:30.503400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.503410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.663 [2024-07-26 12:14:30.605262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.663 [2024-07-26 12:14:30.605391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.663 [2024-07-26 12:14:30.605453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.663 [2024-07-26 12:14:30.605635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.663 [2024-07-26 12:14:30.605704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.663 [2024-07-26 12:14:30.605776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:42.663 [2024-07-26 12:14:30.605829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.663 [2024-07-26 12:14:30.605840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.663 [2024-07-26 12:14:30.605850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.663 [2024-07-26 12:14:30.605863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.663 [2024-07-26 12:14:30.605997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.236 ms, result 0 00:20:44.040 00:20:44.040 00:20:44.040 12:14:31 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79969 00:20:44.040 12:14:31 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:44.040 12:14:31 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79969 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79969 ']' 00:20:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.040 12:14:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:44.040 [2024-07-26 12:14:31.939007] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:20:44.040 [2024-07-26 12:14:31.939143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79969 ] 00:20:44.298 [2024-07-26 12:14:32.108309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.557 [2024-07-26 12:14:32.344786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.493 12:14:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.493 12:14:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:20:45.493 12:14:33 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:45.493 [2024-07-26 12:14:33.459273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.493 [2024-07-26 12:14:33.459354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:45.752 [2024-07-26 12:14:33.637450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.637512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:45.752 [2024-07-26 12:14:33.637529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:45.752 [2024-07-26 12:14:33.637542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.640687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.640730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.752 [2024-07-26 12:14:33.640744] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.127 ms 00:20:45.752 [2024-07-26 12:14:33.640756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.640856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:45.752 [2024-07-26 12:14:33.642000] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:45.752 [2024-07-26 12:14:33.642035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.642048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.752 [2024-07-26 12:14:33.642059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:20:45.752 [2024-07-26 12:14:33.642075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.643640] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:45.752 [2024-07-26 12:14:33.661617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.661676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:45.752 [2024-07-26 12:14:33.661695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.996 ms 00:20:45.752 [2024-07-26 12:14:33.661706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.661851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.661866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:45.752 [2024-07-26 12:14:33.661880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:45.752 [2024-07-26 12:14:33.661890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.669599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.669640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.752 [2024-07-26 12:14:33.669660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:20:45.752 [2024-07-26 12:14:33.669670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.669810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.669825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.752 [2024-07-26 12:14:33.669839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:45.752 [2024-07-26 12:14:33.669853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.669887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.669898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:45.752 [2024-07-26 12:14:33.669910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:45.752 [2024-07-26 12:14:33.669920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.669952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:45.752 [2024-07-26 12:14:33.675422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:45.752 [2024-07-26 12:14:33.675467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.752 [2024-07-26 12:14:33.675484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.489 ms 00:20:45.752 [2024-07-26 12:14:33.675501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.675592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.675615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:45.752 [2024-07-26 12:14:33.675634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:45.752 [2024-07-26 12:14:33.675650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.675680] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:45.752 [2024-07-26 12:14:33.675712] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:45.752 [2024-07-26 12:14:33.675763] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:45.752 [2024-07-26 12:14:33.675799] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:45.752 [2024-07-26 12:14:33.675911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:45.752 [2024-07-26 12:14:33.675936] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:45.752 [2024-07-26 12:14:33.675949] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:45.752 [2024-07-26 12:14:33.675965] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:45.752 [2024-07-26 12:14:33.675977] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:45.752 [2024-07-26 12:14:33.675991] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:45.752 [2024-07-26 12:14:33.676010] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:45.752 [2024-07-26 12:14:33.676023] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:45.752 [2024-07-26 12:14:33.676033] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:45.752 [2024-07-26 12:14:33.676049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.676059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:45.752 [2024-07-26 12:14:33.676072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:45.752 [2024-07-26 12:14:33.676085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.676176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.752 [2024-07-26 12:14:33.676187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:45.752 [2024-07-26 12:14:33.676200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:45.752 [2024-07-26 12:14:33.676210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.752 [2024-07-26 12:14:33.676304] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:45.752 [2024-07-26 12:14:33.676318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:45.752 [2024-07-26 12:14:33.676331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.752 [2024-07-26 12:14:33.676341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.752 [2024-07-26 12:14:33.676359] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:45.752 [2024-07-26 12:14:33.676368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:45.752 [2024-07-26 12:14:33.676380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:45.752 [2024-07-26 12:14:33.676390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:45.752 [2024-07-26 12:14:33.676404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:45.752 [2024-07-26 12:14:33.676414] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.752 [2024-07-26 12:14:33.676425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:45.752 [2024-07-26 12:14:33.676435] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:45.752 [2024-07-26 12:14:33.676448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:45.752 [2024-07-26 12:14:33.676457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:45.753 [2024-07-26 12:14:33.676469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:45.753 [2024-07-26 12:14:33.676478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:45.753 [2024-07-26 12:14:33.676499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:45.753 [2024-07-26 12:14:33.676530] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676539] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:45.753 [2024-07-26 12:14:33.676559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:45.753 [2024-07-26 12:14:33.676593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676613] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:45.753 [2024-07-26 12:14:33.676634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:45.753 [2024-07-26 
12:14:33.676665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.753 [2024-07-26 12:14:33.676686] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:45.753 [2024-07-26 12:14:33.676695] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:45.753 [2024-07-26 12:14:33.676706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:45.753 [2024-07-26 12:14:33.676715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:45.753 [2024-07-26 12:14:33.676727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:45.753 [2024-07-26 12:14:33.676736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:45.753 [2024-07-26 12:14:33.676758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:45.753 [2024-07-26 12:14:33.676770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676779] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:45.753 [2024-07-26 12:14:33.676792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:45.753 [2024-07-26 12:14:33.676802] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:45.753 [2024-07-26 12:14:33.676823] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:45.753 [2024-07-26 12:14:33.676835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:45.753 [2024-07-26 12:14:33.676845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:45.753 [2024-07-26 12:14:33.676856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:45.753 [2024-07-26 12:14:33.676865] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:45.753 [2024-07-26 12:14:33.676877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:45.753 [2024-07-26 12:14:33.676887] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:45.753 [2024-07-26 12:14:33.676903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.676915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:45.753 [2024-07-26 12:14:33.676930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:45.753 [2024-07-26 12:14:33.676940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:45.753 [2024-07-26 12:14:33.676953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:45.753 [2024-07-26 12:14:33.676963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:45.753 
[2024-07-26 12:14:33.676975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:45.753 [2024-07-26 12:14:33.676985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:45.753 [2024-07-26 12:14:33.676997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:45.753 [2024-07-26 12:14:33.677007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:45.753 [2024-07-26 12:14:33.677020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:45.753 [2024-07-26 12:14:33.677074] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:45.753 [2024-07-26 12:14:33.677087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:45.753 [2024-07-26 12:14:33.677113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:45.753 [2024-07-26 12:14:33.677134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:45.753 [2024-07-26 12:14:33.677147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:45.753 [2024-07-26 12:14:33.677158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.753 [2024-07-26 12:14:33.677172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:45.753 [2024-07-26 12:14:33.677183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:20:45.753 [2024-07-26 12:14:33.677199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.753 [2024-07-26 12:14:33.721432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.753 [2024-07-26 12:14:33.721486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.753 [2024-07-26 12:14:33.721505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.241 ms 00:20:45.753 [2024-07-26 12:14:33.721518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.753 [2024-07-26 12:14:33.721672] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.753 [2024-07-26 12:14:33.721689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.753 [2024-07-26 12:14:33.721701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:45.753 [2024-07-26 12:14:33.721714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.012 [2024-07-26 12:14:33.772667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.012 [2024-07-26 12:14:33.772736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.012 [2024-07-26 12:14:33.772756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.010 ms 00:20:46.012 [2024-07-26 12:14:33.772774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.012 [2024-07-26 12:14:33.772914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.012 [2024-07-26 12:14:33.772934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.012 [2024-07-26 12:14:33.772950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:46.012 [2024-07-26 12:14:33.772967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.012 [2024-07-26 12:14:33.773433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.012 [2024-07-26 12:14:33.773460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.012 [2024-07-26 12:14:33.773476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:20:46.012 [2024-07-26 12:14:33.773492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.012 [2024-07-26 12:14:33.773666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.012 [2024-07-26 12:14:33.773685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.013 [2024-07-26 12:14:33.773696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:20:46.013 [2024-07-26 12:14:33.773708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.796628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.796687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.013 [2024-07-26 12:14:33.796706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.932 ms 00:20:46.013 [2024-07-26 12:14:33.796723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.816637] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:46.013 [2024-07-26 12:14:33.816685] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:46.013 [2024-07-26 12:14:33.816705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.816718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:46.013 [2024-07-26 12:14:33.816729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.852 ms 00:20:46.013 [2024-07-26 12:14:33.816742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.847363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 
12:14:33.847482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:46.013 [2024-07-26 12:14:33.847498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.586 ms 00:20:46.013 [2024-07-26 12:14:33.847514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.866298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.866370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:46.013 [2024-07-26 12:14:33.866396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.724 ms 00:20:46.013 [2024-07-26 12:14:33.866413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.886405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.886450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:46.013 [2024-07-26 12:14:33.886464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.941 ms 00:20:46.013 [2024-07-26 12:14:33.886476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.887343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.887377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:46.013 [2024-07-26 12:14:33.887389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:20:46.013 [2024-07-26 12:14:33.887401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.013 [2024-07-26 12:14:33.989057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.013 [2024-07-26 12:14:33.989147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:46.013 [2024-07-26 12:14:33.989166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.791 ms 00:20:46.013 [2024-07-26 12:14:33.989179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.272 [2024-07-26 12:14:34.002237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:46.272 [2024-07-26 12:14:34.018948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.272 [2024-07-26 12:14:34.019006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:46.272 [2024-07-26 12:14:34.019027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.644 ms 00:20:46.272 [2024-07-26 12:14:34.019038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.272 [2024-07-26 12:14:34.019167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.272 [2024-07-26 12:14:34.019182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:46.272 [2024-07-26 12:14:34.019196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:46.272 [2024-07-26 12:14:34.019207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.272 [2024-07-26 12:14:34.019263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.272 [2024-07-26 12:14:34.019275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:46.272 [2024-07-26 12:14:34.019291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:46.272 
[2024-07-26 12:14:34.019301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.272 [2024-07-26 12:14:34.019328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.272 [2024-07-26 12:14:34.019339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:46.273 [2024-07-26 12:14:34.019351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:46.273 [2024-07-26 12:14:34.019362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.273 [2024-07-26 12:14:34.019399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:46.273 [2024-07-26 12:14:34.019410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.273 [2024-07-26 12:14:34.019425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:46.273 [2024-07-26 12:14:34.019436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:46.273 [2024-07-26 12:14:34.019450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.273 [2024-07-26 12:14:34.056536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.273 [2024-07-26 12:14:34.056587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:46.273 [2024-07-26 12:14:34.056601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.122 ms 00:20:46.273 [2024-07-26 12:14:34.056613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.273 [2024-07-26 12:14:34.056720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.273 [2024-07-26 12:14:34.056740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:46.273 [2024-07-26 12:14:34.056753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:46.273 [2024-07-26 12:14:34.056766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.273 [2024-07-26 12:14:34.057884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:46.273 [2024-07-26 12:14:34.062984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.794 ms, result 0 00:20:46.273 [2024-07-26 12:14:34.064100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.273 Some configs were skipped because the RPC state that can call them passed over. 
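The 'FTL startup' management process above completes in about 420.794 ms, and trim.sh then exercises the unmap path: it issues two bdev_ftl_unmap RPCs, 1024 blocks at LBA 0 and 1024 blocks at LBA 23591936 (the last 1024-block stretch of the 23592960-entry L2P reported later in this log), and finally tears the target down through the killprocess helper traced below. A minimal bash sketch of that flow, reconstructed from the commands visible in this run; the PID 79969 and the rpc.py path are taken from the log, and the killprocess body is an approximation of what common/autotest_common.sh traces here, not its exact source:

#!/usr/bin/env bash
# Illustrative reconstruction of the trim/unmap and teardown steps traced below.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# trim.sh@99 / trim.sh@100: unmap 1024 blocks at the start and at the tail of ftl0
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# Approximation of the killprocess helper: verify the PID is alive, refuse to
# kill a sudo wrapper, then signal the reactor process and wait for it to exit.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

killprocess 79969   # trim.sh@102 in this run

Each unmap triggers the 'Process trim' management step reported below, and killing the target drives the 'FTL shutdown' sequence that persists the L2P, band and trim metadata, and the superblock before the process exits.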
00:20:46.273 12:14:34 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:46.531 [2024-07-26 12:14:34.296722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.532 [2024-07-26 12:14:34.296779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.532 [2024-07-26 12:14:34.296801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:20:46.532 [2024-07-26 12:14:34.296812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.532 [2024-07-26 12:14:34.296854] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.042 ms, result 0 00:20:46.532 true 00:20:46.532 12:14:34 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:46.791 [2024-07-26 12:14:34.510912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.791 [2024-07-26 12:14:34.510979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:46.791 [2024-07-26 12:14:34.510995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:20:46.791 [2024-07-26 12:14:34.511007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.791 [2024-07-26 12:14:34.511065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.286 ms, result 0 00:20:46.791 true 00:20:46.791 12:14:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79969 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79969 ']' 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79969 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79969 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:46.791 killing process with pid 79969 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79969' 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79969 00:20:46.791 12:14:34 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79969 00:20:48.170 [2024-07-26 12:14:35.709708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.709792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.170 [2024-07-26 12:14:35.709817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:48.170 [2024-07-26 12:14:35.709836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.709871] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:48.170 [2024-07-26 12:14:35.713973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.714021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.170 [2024-07-26 12:14:35.714038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.085 ms 00:20:48.170 [2024-07-26 12:14:35.714059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.714395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.714418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.170 [2024-07-26 12:14:35.714435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:48.170 [2024-07-26 12:14:35.714453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.717977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.718028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.170 [2024-07-26 12:14:35.718041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:20:48.170 [2024-07-26 12:14:35.718053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.723957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.724003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.170 [2024-07-26 12:14:35.724016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.866 ms 00:20:48.170 [2024-07-26 12:14:35.724031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.739638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.739699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.170 [2024-07-26 12:14:35.739713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.579 ms 00:20:48.170 [2024-07-26 12:14:35.739729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.750653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.750702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.170 [2024-07-26 12:14:35.750716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.883 ms 00:20:48.170 [2024-07-26 12:14:35.750728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.750876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.750892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.170 [2024-07-26 12:14:35.750903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:48.170 [2024-07-26 12:14:35.750927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.767925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.767972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:48.170 [2024-07-26 12:14:35.767985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.005 ms 00:20:48.170 [2024-07-26 12:14:35.767997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.784136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.784182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:48.170 [2024-07-26 
12:14:35.784196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.114 ms 00:20:48.170 [2024-07-26 12:14:35.784217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.800267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.800315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.170 [2024-07-26 12:14:35.800329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.024 ms 00:20:48.170 [2024-07-26 12:14:35.800341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.815246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.170 [2024-07-26 12:14:35.815291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.170 [2024-07-26 12:14:35.815304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.862 ms 00:20:48.170 [2024-07-26 12:14:35.815316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.170 [2024-07-26 12:14:35.815354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.170 [2024-07-26 12:14:35.815374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815577] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.170 [2024-07-26 12:14:35.815706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 
12:14:35.815877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.815994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:48.171 [2024-07-26 12:14:35.816189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.171 [2024-07-26 12:14:35.816602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.171 [2024-07-26 12:14:35.816612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 00:20:48.171 [2024-07-26 12:14:35.816630] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.171 [2024-07-26 12:14:35.816640] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.171 [2024-07-26 12:14:35.816652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.171 [2024-07-26 12:14:35.816662] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.171 [2024-07-26 12:14:35.816674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.171 [2024-07-26 12:14:35.816684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.171 [2024-07-26 12:14:35.816696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.171 [2024-07-26 12:14:35.816705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.171 [2024-07-26 12:14:35.816729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.171 [2024-07-26 12:14:35.816738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.171 [2024-07-26 12:14:35.816750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.171 [2024-07-26 12:14:35.816761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:20:48.171 [2024-07-26 12:14:35.816775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.171 [2024-07-26 12:14:35.836251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.171 [2024-07-26 12:14:35.836301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.171 [2024-07-26 12:14:35.836319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.474 ms 00:20:48.171 [2024-07-26 12:14:35.836340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.171 [2024-07-26 12:14:35.836889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:48.172 [2024-07-26 12:14:35.836917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.172 [2024-07-26 12:14:35.836937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:20:48.172 [2024-07-26 12:14:35.836955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:35.905553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:35.905619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.172 [2024-07-26 12:14:35.905641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:35.905653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:35.905790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:35.905807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.172 [2024-07-26 12:14:35.905821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:35.905833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:35.905886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:35.905902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.172 [2024-07-26 12:14:35.905913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:35.905928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:35.905947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:35.905960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.172 [2024-07-26 12:14:35.905970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:35.905984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.028413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.028493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:48.172 [2024-07-26 12:14:36.028516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.028534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.133837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.133913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:48.172 [2024-07-26 12:14:36.133933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.133946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:48.172 [2024-07-26 12:14:36.134084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:48.172 [2024-07-26 12:14:36.134164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:48.172 [2024-07-26 12:14:36.134190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:48.172 [2024-07-26 12:14:36.134370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:48.172 [2024-07-26 12:14:36.134449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:48.172 [2024-07-26 12:14:36.134532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.172 [2024-07-26 12:14:36.134608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:48.172 [2024-07-26 12:14:36.134619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.172 [2024-07-26 12:14:36.134633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.172 [2024-07-26 12:14:36.134783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 425.750 ms, result 0 00:20:49.549 12:14:37 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:49.549 [2024-07-26 12:14:37.302922] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
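The 'FTL shutdown' process above finishes with result 0 after 425.750 ms; its statistics dump records total writes: 960 against user writes: 0, which is why the write amplification factor is printed as inf. trim.sh@105 then restarts the target through spdk_dd to read the trimmed device back into a file for verification. A sketch of that read-back invocation, using only the parameters shown in this run:

# trim.sh@105: bring FTL back up from the saved ftl.json config and read
# 65536 blocks (dd-style count) from the ftl0 bdev into a local data file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
    --count=65536 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The DPDK EAL parameters and the second 'FTL startup' sequence that follow are this spdk_dd process initializing its own SPDK application instance and re-opening ftl0 from the superblock persisted during the shutdown above.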
00:20:49.549 [2024-07-26 12:14:37.303046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80038 ] 00:20:49.549 [2024-07-26 12:14:37.471649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.808 [2024-07-26 12:14:37.707202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.375 [2024-07-26 12:14:38.096904] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.375 [2024-07-26 12:14:38.096977] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.375 [2024-07-26 12:14:38.260077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.260150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.375 [2024-07-26 12:14:38.260167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.375 [2024-07-26 12:14:38.260179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.263432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.263478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.375 [2024-07-26 12:14:38.263492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:20:50.375 [2024-07-26 12:14:38.263503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.263661] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.375 [2024-07-26 12:14:38.264818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.375 [2024-07-26 12:14:38.264855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.264868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.375 [2024-07-26 12:14:38.264880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:20:50.375 [2024-07-26 12:14:38.264891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.266506] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.375 [2024-07-26 12:14:38.287908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.287953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.375 [2024-07-26 12:14:38.287974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.437 ms 00:20:50.375 [2024-07-26 12:14:38.287985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.288090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.288106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.375 [2024-07-26 12:14:38.288118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:50.375 [2024-07-26 12:14:38.288154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.295067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:50.375 [2024-07-26 12:14:38.295097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.375 [2024-07-26 12:14:38.295126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.879 ms 00:20:50.375 [2024-07-26 12:14:38.295152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.295255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.295272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.375 [2024-07-26 12:14:38.295284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:50.375 [2024-07-26 12:14:38.295294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.295329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.295340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.375 [2024-07-26 12:14:38.295355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.375 [2024-07-26 12:14:38.295366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.295391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:50.375 [2024-07-26 12:14:38.301222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.301253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.375 [2024-07-26 12:14:38.301266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.846 ms 00:20:50.375 [2024-07-26 12:14:38.301276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.301350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.301363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.375 [2024-07-26 12:14:38.301375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.375 [2024-07-26 12:14:38.301386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.301410] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.375 [2024-07-26 12:14:38.301436] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:50.375 [2024-07-26 12:14:38.301475] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.375 [2024-07-26 12:14:38.301493] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:50.375 [2024-07-26 12:14:38.301598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.375 [2024-07-26 12:14:38.301612] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.375 [2024-07-26 12:14:38.301637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:50.375 [2024-07-26 12:14:38.301651] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.375 [2024-07-26 12:14:38.301664] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.375 [2024-07-26 12:14:38.301679] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:50.375 [2024-07-26 12:14:38.301690] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.375 [2024-07-26 12:14:38.301701] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.375 [2024-07-26 12:14:38.301711] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.375 [2024-07-26 12:14:38.301722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.301733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.375 [2024-07-26 12:14:38.301744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:20:50.375 [2024-07-26 12:14:38.301754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.301832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.375 [2024-07-26 12:14:38.301844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.375 [2024-07-26 12:14:38.301857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:50.375 [2024-07-26 12:14:38.301868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.375 [2024-07-26 12:14:38.301958] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.375 [2024-07-26 12:14:38.301971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.375 [2024-07-26 12:14:38.301983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.375 [2024-07-26 12:14:38.301993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.375 [2024-07-26 12:14:38.302005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.375 [2024-07-26 12:14:38.302015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.375 [2024-07-26 12:14:38.302026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:50.375 [2024-07-26 12:14:38.302036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.375 [2024-07-26 12:14:38.302046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:50.375 [2024-07-26 12:14:38.302056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.375 [2024-07-26 12:14:38.302067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.375 [2024-07-26 12:14:38.302077] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:50.376 [2024-07-26 12:14:38.302087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.376 [2024-07-26 12:14:38.302096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.376 [2024-07-26 12:14:38.302107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:50.376 [2024-07-26 12:14:38.302116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302139] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:50.376 [2024-07-26 12:14:38.302150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302171] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.376 [2024-07-26 12:14:38.302192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.376 [2024-07-26 12:14:38.302222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302232] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.376 [2024-07-26 12:14:38.302251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.376 [2024-07-26 12:14:38.302281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.376 [2024-07-26 12:14:38.302311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.376 [2024-07-26 12:14:38.302330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.376 [2024-07-26 12:14:38.302340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:50.376 [2024-07-26 12:14:38.302350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.376 [2024-07-26 12:14:38.302360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.376 [2024-07-26 12:14:38.302370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:50.376 [2024-07-26 12:14:38.302380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302389] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.376 [2024-07-26 12:14:38.302399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:50.376 [2024-07-26 12:14:38.302410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302420] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.376 [2024-07-26 12:14:38.302431] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.376 [2024-07-26 12:14:38.302441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302451] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.376 [2024-07-26 12:14:38.302466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.376 [2024-07-26 12:14:38.302476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.376 [2024-07-26 12:14:38.302486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.376 
[2024-07-26 12:14:38.302496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.376 [2024-07-26 12:14:38.302506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.376 [2024-07-26 12:14:38.302516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.376 [2024-07-26 12:14:38.302527] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.376 [2024-07-26 12:14:38.302540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:50.376 [2024-07-26 12:14:38.302563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:50.376 [2024-07-26 12:14:38.302574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:50.376 [2024-07-26 12:14:38.302585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:50.376 [2024-07-26 12:14:38.302596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:50.376 [2024-07-26 12:14:38.302607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:50.376 [2024-07-26 12:14:38.302617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:50.376 [2024-07-26 12:14:38.302629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:50.376 [2024-07-26 12:14:38.302640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:50.376 [2024-07-26 12:14:38.302650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:50.376 [2024-07-26 12:14:38.302706] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.376 [2024-07-26 12:14:38.302718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.376 [2024-07-26 12:14:38.302742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.376 [2024-07-26 12:14:38.302753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.376 [2024-07-26 12:14:38.302765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.376 [2024-07-26 12:14:38.302782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.376 [2024-07-26 12:14:38.302793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.376 [2024-07-26 12:14:38.302804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:20:50.376 [2024-07-26 12:14:38.302815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.355979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.356033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.634 [2024-07-26 12:14:38.356054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.189 ms 00:20:50.634 [2024-07-26 12:14:38.356065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.356262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.356293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.634 [2024-07-26 12:14:38.356320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:50.634 [2024-07-26 12:14:38.356331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.408109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.408168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.634 [2024-07-26 12:14:38.408183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.835 ms 00:20:50.634 [2024-07-26 12:14:38.408198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.408302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.408315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.634 [2024-07-26 12:14:38.408328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.634 [2024-07-26 12:14:38.408338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.408783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.408797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.634 [2024-07-26 12:14:38.408809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:20:50.634 [2024-07-26 12:14:38.408820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.408949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.408963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.634 [2024-07-26 12:14:38.408973] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:50.634 [2024-07-26 12:14:38.408984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.430901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.430950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.634 [2024-07-26 12:14:38.430965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.928 ms 00:20:50.634 [2024-07-26 12:14:38.430977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.452971] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:50.634 [2024-07-26 12:14:38.453029] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.634 [2024-07-26 12:14:38.453048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.453061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.634 [2024-07-26 12:14:38.453074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.945 ms 00:20:50.634 [2024-07-26 12:14:38.453085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.485508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.485563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.634 [2024-07-26 12:14:38.485580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.356 ms 00:20:50.634 [2024-07-26 12:14:38.485592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.506401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.506458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.634 [2024-07-26 12:14:38.506474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.706 ms 00:20:50.634 [2024-07-26 12:14:38.506485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.527435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.527484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.634 [2024-07-26 12:14:38.527499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.874 ms 00:20:50.634 [2024-07-26 12:14:38.527509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.634 [2024-07-26 12:14:38.528469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.634 [2024-07-26 12:14:38.528501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.634 [2024-07-26 12:14:38.528515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:20:50.634 [2024-07-26 12:14:38.528526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.622599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.892 [2024-07-26 12:14:38.622667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.892 [2024-07-26 12:14:38.622685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.192 ms 00:20:50.892 [2024-07-26 12:14:38.622697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.637541] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:50.892 [2024-07-26 12:14:38.655058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.892 [2024-07-26 12:14:38.655129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.892 [2024-07-26 12:14:38.655146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.266 ms 00:20:50.892 [2024-07-26 12:14:38.655157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.655284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.892 [2024-07-26 12:14:38.655298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.892 [2024-07-26 12:14:38.655310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:50.892 [2024-07-26 12:14:38.655321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.655378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.892 [2024-07-26 12:14:38.655390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.892 [2024-07-26 12:14:38.655401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:50.892 [2024-07-26 12:14:38.655412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.655435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.892 [2024-07-26 12:14:38.655450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.892 [2024-07-26 12:14:38.655461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:50.892 [2024-07-26 12:14:38.655488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.892 [2024-07-26 12:14:38.655524] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.892 [2024-07-26 12:14:38.655537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.893 [2024-07-26 12:14:38.655548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.893 [2024-07-26 12:14:38.655558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:50.893 [2024-07-26 12:14:38.655569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.893 [2024-07-26 12:14:38.696752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.893 [2024-07-26 12:14:38.696820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.893 [2024-07-26 12:14:38.696837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.224 ms 00:20:50.893 [2024-07-26 12:14:38.696848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.893 [2024-07-26 12:14:38.696984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.893 [2024-07-26 12:14:38.696998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.893 [2024-07-26 12:14:38.697010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:50.893 [2024-07-26 12:14:38.697021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:50.893 [2024-07-26 12:14:38.698020] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.893 [2024-07-26 12:14:38.703547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 438.344 ms, result 0 00:20:50.893 [2024-07-26 12:14:38.704281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:50.893 [2024-07-26 12:14:38.724002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.025  Copying: 34/256 [MB] (34 MBps) Copying: 63/256 [MB] (29 MBps) Copying: 93/256 [MB] (29 MBps) Copying: 120/256 [MB] (27 MBps) Copying: 147/256 [MB] (27 MBps) Copying: 176/256 [MB] (29 MBps) Copying: 205/256 [MB] (29 MBps) Copying: 234/256 [MB] (28 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-07-26 12:14:47.928591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:00.025 [2024-07-26 12:14:47.944713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.944773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:00.025 [2024-07-26 12:14:47.944794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.025 [2024-07-26 12:14:47.944810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.025 [2024-07-26 12:14:47.944855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:00.025 [2024-07-26 12:14:47.948891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.948929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:00.025 [2024-07-26 12:14:47.948942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.018 ms 00:21:00.025 [2024-07-26 12:14:47.948952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.025 [2024-07-26 12:14:47.949201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.949215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:00.025 [2024-07-26 12:14:47.949225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:21:00.025 [2024-07-26 12:14:47.949235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.025 [2024-07-26 12:14:47.952265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.952290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:00.025 [2024-07-26 12:14:47.952308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:21:00.025 [2024-07-26 12:14:47.952318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.025 [2024-07-26 12:14:47.958342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.958385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:00.025 [2024-07-26 12:14:47.958397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.010 ms 00:21:00.025 [2024-07-26 12:14:47.958408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.025 [2024-07-26 12:14:47.996441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:00.025 [2024-07-26 12:14:47.996492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:00.026 [2024-07-26 12:14:47.996507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.013 ms 00:21:00.026 [2024-07-26 12:14:47.996518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.018363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.018421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:00.286 [2024-07-26 12:14:48.018438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.809 ms 00:21:00.286 [2024-07-26 12:14:48.018455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.018612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.018626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:00.286 [2024-07-26 12:14:48.018637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:21:00.286 [2024-07-26 12:14:48.018647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.055233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.055279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:00.286 [2024-07-26 12:14:48.055293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.625 ms 00:21:00.286 [2024-07-26 12:14:48.055303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.091290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.091336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:00.286 [2024-07-26 12:14:48.091350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.983 ms 00:21:00.286 [2024-07-26 12:14:48.091360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.127713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.127769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:00.286 [2024-07-26 12:14:48.127785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.352 ms 00:21:00.286 [2024-07-26 12:14:48.127795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.165194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.286 [2024-07-26 12:14:48.165245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:00.286 [2024-07-26 12:14:48.165259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.361 ms 00:21:00.286 [2024-07-26 12:14:48.165270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.286 [2024-07-26 12:14:48.165330] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:00.286 [2024-07-26 12:14:48.165354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165379] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:00.286 [2024-07-26 12:14:48.165563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165656] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 
[2024-07-26 12:14:48.165927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.165990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:21:00.287 [2024-07-26 12:14:48.166210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:00.287 [2024-07-26 12:14:48.166466] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:00.287 [2024-07-26 12:14:48.166482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc106026-f60a-43f0-978e-e006bfb6b3f6 
00:21:00.287 [2024-07-26 12:14:48.166493] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:00.287 [2024-07-26 12:14:48.166503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:00.287 [2024-07-26 12:14:48.166525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:00.287 [2024-07-26 12:14:48.166536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:00.287 [2024-07-26 12:14:48.166546] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:00.287 [2024-07-26 12:14:48.166556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:00.287 [2024-07-26 12:14:48.166566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:00.287 [2024-07-26 12:14:48.166575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:00.288 [2024-07-26 12:14:48.166584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:00.288 [2024-07-26 12:14:48.166594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.288 [2024-07-26 12:14:48.166604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:00.288 [2024-07-26 12:14:48.166619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:21:00.288 [2024-07-26 12:14:48.166629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.187396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.288 [2024-07-26 12:14:48.187449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:00.288 [2024-07-26 12:14:48.187463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.778 ms 00:21:00.288 [2024-07-26 12:14:48.187474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.188012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.288 [2024-07-26 12:14:48.188034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:00.288 [2024-07-26 12:14:48.188045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:21:00.288 [2024-07-26 12:14:48.188054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.238150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.288 [2024-07-26 12:14:48.238202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.288 [2024-07-26 12:14:48.238216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.288 [2024-07-26 12:14:48.238226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.238318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.288 [2024-07-26 12:14:48.238333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.288 [2024-07-26 12:14:48.238345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.288 [2024-07-26 12:14:48.238355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.238404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.288 [2024-07-26 12:14:48.238416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.288 [2024-07-26 12:14:48.238427] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.288 [2024-07-26 12:14:48.238437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.288 [2024-07-26 12:14:48.238456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.288 [2024-07-26 12:14:48.238466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.288 [2024-07-26 12:14:48.238480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.288 [2024-07-26 12:14:48.238490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.547 [2024-07-26 12:14:48.362750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.362806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.548 [2024-07-26 12:14:48.362821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.362832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.469892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.469963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.548 [2024-07-26 12:14:48.469978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.469988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.548 [2024-07-26 12:14:48.470101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.548 [2024-07-26 12:14:48.470172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.548 [2024-07-26 12:14:48.470316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:00.548 [2024-07-26 12:14:48.470382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:21:00.548 [2024-07-26 12:14:48.470454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:00.548 [2024-07-26 12:14:48.470518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.548 [2024-07-26 12:14:48.470529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:00.548 [2024-07-26 12:14:48.470541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.548 [2024-07-26 12:14:48.470676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.836 ms, result 0 00:21:01.926 00:21:01.926 00:21:01.926 12:14:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:02.186 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:02.186 12:14:50 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:02.186 12:14:50 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:02.186 12:14:50 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:02.186 12:14:50 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:02.445 12:14:50 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:02.445 12:14:50 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:02.445 12:14:50 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79969 00:21:02.445 Process with pid 79969 is not found 00:21:02.445 12:14:50 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79969 ']' 00:21:02.446 12:14:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79969 00:21:02.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79969) - No such process 00:21:02.446 12:14:50 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 79969 is not found' 00:21:02.446 00:21:02.446 real 1m7.122s 00:21:02.446 user 1m29.494s 00:21:02.446 sys 0m6.397s 00:21:02.446 12:14:50 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:02.446 12:14:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:02.446 ************************************ 00:21:02.446 END TEST ftl_trim 00:21:02.446 ************************************ 00:21:02.446 12:14:50 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:02.446 12:14:50 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:02.446 12:14:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:02.446 12:14:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:02.446 ************************************ 00:21:02.446 START TEST ftl_restore 00:21:02.446 ************************************ 00:21:02.446 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:02.705 * Looking for test storage... 
00:21:02.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.buSnEMpccO 00:21:02.705 12:14:50 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80233 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80233 00:21:02.705 12:14:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:02.705 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80233 ']' 00:21:02.706 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.706 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.706 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.706 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.706 12:14:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:02.706 [2024-07-26 12:14:50.613711] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:21:02.706 [2024-07-26 12:14:50.613845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80233 ] 00:21:02.965 [2024-07-26 12:14:50.783722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.223 [2024-07-26 12:14:51.017222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.163 12:14:51 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.163 12:14:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:04.163 12:14:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:04.422 12:14:52 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:04.422 12:14:52 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:04.422 12:14:52 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:04.422 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:04.422 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:04.422 12:14:52 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:21:04.422 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:04.422 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:04.422 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:04.422 { 00:21:04.422 "name": "nvme0n1", 00:21:04.422 "aliases": [ 00:21:04.422 "3bcb4a49-8b8c-4f98-9cc0-4540ff8d8521" 00:21:04.423 ], 00:21:04.423 "product_name": "NVMe disk", 00:21:04.423 "block_size": 4096, 00:21:04.423 "num_blocks": 1310720, 00:21:04.423 "uuid": "3bcb4a49-8b8c-4f98-9cc0-4540ff8d8521", 00:21:04.423 "assigned_rate_limits": { 00:21:04.423 "rw_ios_per_sec": 0, 00:21:04.423 "rw_mbytes_per_sec": 0, 00:21:04.423 "r_mbytes_per_sec": 0, 00:21:04.423 "w_mbytes_per_sec": 0 00:21:04.423 }, 00:21:04.423 "claimed": true, 00:21:04.423 "claim_type": "read_many_write_one", 00:21:04.423 "zoned": false, 00:21:04.423 "supported_io_types": { 00:21:04.423 "read": true, 00:21:04.423 "write": true, 00:21:04.423 "unmap": true, 00:21:04.423 "flush": true, 00:21:04.423 "reset": true, 00:21:04.423 "nvme_admin": true, 00:21:04.423 "nvme_io": true, 00:21:04.423 "nvme_io_md": false, 00:21:04.423 "write_zeroes": true, 00:21:04.423 "zcopy": false, 00:21:04.423 "get_zone_info": false, 00:21:04.423 "zone_management": false, 00:21:04.423 "zone_append": false, 00:21:04.423 "compare": true, 00:21:04.423 "compare_and_write": false, 00:21:04.423 "abort": true, 00:21:04.423 "seek_hole": false, 00:21:04.423 "seek_data": false, 00:21:04.423 "copy": true, 00:21:04.423 "nvme_iov_md": false 00:21:04.423 }, 00:21:04.423 "driver_specific": { 00:21:04.423 "nvme": [ 00:21:04.423 { 00:21:04.423 "pci_address": "0000:00:11.0", 00:21:04.423 "trid": { 00:21:04.423 "trtype": "PCIe", 00:21:04.423 "traddr": "0000:00:11.0" 00:21:04.423 }, 00:21:04.423 "ctrlr_data": { 00:21:04.423 "cntlid": 0, 00:21:04.423 "vendor_id": "0x1b36", 00:21:04.423 "model_number": "QEMU NVMe Ctrl", 00:21:04.423 "serial_number": "12341", 00:21:04.423 "firmware_revision": "8.0.0", 00:21:04.423 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:04.423 "oacs": { 00:21:04.423 "security": 0, 00:21:04.423 "format": 1, 00:21:04.423 "firmware": 0, 00:21:04.423 "ns_manage": 1 00:21:04.423 }, 00:21:04.423 "multi_ctrlr": false, 00:21:04.423 "ana_reporting": false 00:21:04.423 }, 00:21:04.423 "vs": { 00:21:04.423 "nvme_version": "1.4" 00:21:04.423 }, 00:21:04.423 "ns_data": { 00:21:04.423 "id": 1, 00:21:04.423 "can_share": false 00:21:04.423 } 00:21:04.423 } 00:21:04.423 ], 00:21:04.423 "mp_policy": "active_passive" 00:21:04.423 } 00:21:04.423 } 00:21:04.423 ]' 00:21:04.423 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:04.682 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:04.682 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:04.682 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:04.682 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:04.682 12:14:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=da950123-7fee-4d9f-b8b3-40b2d4c9e80f 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:04.682 12:14:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da950123-7fee-4d9f-b8b3-40b2d4c9e80f 00:21:04.941 12:14:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:05.200 12:14:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7f3c9051-084a-4b90-b0ae-016595e75ac2 00:21:05.200 12:14:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7f3c9051-084a-4b90-b0ae-016595e75ac2 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:05.465 12:14:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.465 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.465 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:05.465 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:05.465 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:05.465 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:05.727 { 00:21:05.727 "name": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:05.727 "aliases": [ 00:21:05.727 "lvs/nvme0n1p0" 00:21:05.727 ], 00:21:05.727 "product_name": "Logical Volume", 00:21:05.727 "block_size": 4096, 00:21:05.727 "num_blocks": 26476544, 00:21:05.727 "uuid": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:05.727 "assigned_rate_limits": { 00:21:05.727 "rw_ios_per_sec": 0, 00:21:05.727 "rw_mbytes_per_sec": 0, 00:21:05.727 "r_mbytes_per_sec": 0, 00:21:05.727 "w_mbytes_per_sec": 0 00:21:05.727 }, 00:21:05.727 "claimed": false, 00:21:05.727 "zoned": false, 00:21:05.727 "supported_io_types": { 00:21:05.727 "read": true, 00:21:05.727 "write": true, 00:21:05.727 "unmap": true, 00:21:05.727 "flush": false, 00:21:05.727 "reset": true, 00:21:05.727 "nvme_admin": false, 00:21:05.727 "nvme_io": false, 00:21:05.727 "nvme_io_md": false, 00:21:05.727 "write_zeroes": true, 00:21:05.727 "zcopy": false, 00:21:05.727 "get_zone_info": false, 00:21:05.727 "zone_management": false, 00:21:05.727 "zone_append": false, 00:21:05.727 "compare": false, 00:21:05.727 "compare_and_write": false, 00:21:05.727 "abort": false, 
00:21:05.727 "seek_hole": true, 00:21:05.727 "seek_data": true, 00:21:05.727 "copy": false, 00:21:05.727 "nvme_iov_md": false 00:21:05.727 }, 00:21:05.727 "driver_specific": { 00:21:05.727 "lvol": { 00:21:05.727 "lvol_store_uuid": "7f3c9051-084a-4b90-b0ae-016595e75ac2", 00:21:05.727 "base_bdev": "nvme0n1", 00:21:05.727 "thin_provision": true, 00:21:05.727 "num_allocated_clusters": 0, 00:21:05.727 "snapshot": false, 00:21:05.727 "clone": false, 00:21:05.727 "esnap_clone": false 00:21:05.727 } 00:21:05.727 } 00:21:05.727 } 00:21:05.727 ]' 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:05.727 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:05.727 12:14:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:05.727 12:14:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:05.727 12:14:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:05.987 12:14:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:05.987 12:14:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:05.987 12:14:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:05.987 { 00:21:05.987 "name": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:05.987 "aliases": [ 00:21:05.987 "lvs/nvme0n1p0" 00:21:05.987 ], 00:21:05.987 "product_name": "Logical Volume", 00:21:05.987 "block_size": 4096, 00:21:05.987 "num_blocks": 26476544, 00:21:05.987 "uuid": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:05.987 "assigned_rate_limits": { 00:21:05.987 "rw_ios_per_sec": 0, 00:21:05.987 "rw_mbytes_per_sec": 0, 00:21:05.987 "r_mbytes_per_sec": 0, 00:21:05.987 "w_mbytes_per_sec": 0 00:21:05.987 }, 00:21:05.987 "claimed": false, 00:21:05.987 "zoned": false, 00:21:05.987 "supported_io_types": { 00:21:05.987 "read": true, 00:21:05.987 "write": true, 00:21:05.987 "unmap": true, 00:21:05.987 "flush": false, 00:21:05.987 "reset": true, 00:21:05.987 "nvme_admin": false, 00:21:05.987 "nvme_io": false, 00:21:05.987 "nvme_io_md": false, 00:21:05.987 "write_zeroes": true, 00:21:05.987 "zcopy": false, 00:21:05.987 "get_zone_info": false, 00:21:05.987 "zone_management": false, 00:21:05.987 "zone_append": false, 00:21:05.987 "compare": false, 00:21:05.987 "compare_and_write": false, 00:21:05.987 "abort": false, 00:21:05.987 "seek_hole": true, 00:21:05.987 "seek_data": true, 
00:21:05.987 "copy": false, 00:21:05.987 "nvme_iov_md": false 00:21:05.987 }, 00:21:05.987 "driver_specific": { 00:21:05.987 "lvol": { 00:21:05.987 "lvol_store_uuid": "7f3c9051-084a-4b90-b0ae-016595e75ac2", 00:21:05.987 "base_bdev": "nvme0n1", 00:21:05.987 "thin_provision": true, 00:21:05.987 "num_allocated_clusters": 0, 00:21:05.987 "snapshot": false, 00:21:05.987 "clone": false, 00:21:05.987 "esnap_clone": false 00:21:05.987 } 00:21:05.987 } 00:21:05.987 } 00:21:05.987 ]' 00:21:05.987 12:14:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:06.246 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:06.246 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:06.246 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:06.246 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:06.246 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:06.246 12:14:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:06.246 12:14:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:06.504 12:14:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:06.504 12:14:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6b650dd-d46c-4c50-9a3d-bb26a6faa673 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:06.504 { 00:21:06.504 "name": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:06.504 "aliases": [ 00:21:06.504 "lvs/nvme0n1p0" 00:21:06.504 ], 00:21:06.504 "product_name": "Logical Volume", 00:21:06.504 "block_size": 4096, 00:21:06.504 "num_blocks": 26476544, 00:21:06.504 "uuid": "c6b650dd-d46c-4c50-9a3d-bb26a6faa673", 00:21:06.504 "assigned_rate_limits": { 00:21:06.504 "rw_ios_per_sec": 0, 00:21:06.504 "rw_mbytes_per_sec": 0, 00:21:06.504 "r_mbytes_per_sec": 0, 00:21:06.504 "w_mbytes_per_sec": 0 00:21:06.504 }, 00:21:06.504 "claimed": false, 00:21:06.504 "zoned": false, 00:21:06.504 "supported_io_types": { 00:21:06.504 "read": true, 00:21:06.504 "write": true, 00:21:06.504 "unmap": true, 00:21:06.504 "flush": false, 00:21:06.504 "reset": true, 00:21:06.504 "nvme_admin": false, 00:21:06.504 "nvme_io": false, 00:21:06.504 "nvme_io_md": false, 00:21:06.504 "write_zeroes": true, 00:21:06.504 "zcopy": false, 00:21:06.504 "get_zone_info": false, 00:21:06.504 "zone_management": false, 00:21:06.504 "zone_append": false, 00:21:06.504 "compare": false, 00:21:06.504 "compare_and_write": false, 00:21:06.504 "abort": false, 00:21:06.504 "seek_hole": true, 00:21:06.504 "seek_data": true, 00:21:06.504 "copy": false, 00:21:06.504 "nvme_iov_md": false 00:21:06.504 }, 00:21:06.504 "driver_specific": { 00:21:06.504 "lvol": { 00:21:06.504 "lvol_store_uuid": "7f3c9051-084a-4b90-b0ae-016595e75ac2", 00:21:06.504 "base_bdev": "nvme0n1", 
00:21:06.504 "thin_provision": true, 00:21:06.504 "num_allocated_clusters": 0, 00:21:06.504 "snapshot": false, 00:21:06.504 "clone": false, 00:21:06.504 "esnap_clone": false 00:21:06.504 } 00:21:06.504 } 00:21:06.504 } 00:21:06.504 ]' 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:21:06.504 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:06.763 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:06.763 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:06.763 12:14:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c6b650dd-d46c-4c50-9a3d-bb26a6faa673 --l2p_dram_limit 10' 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:06.763 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:06.763 12:14:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c6b650dd-d46c-4c50-9a3d-bb26a6faa673 --l2p_dram_limit 10 -c nvc0n1p0 00:21:06.763 [2024-07-26 12:14:54.660411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.660474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:06.763 [2024-07-26 12:14:54.660490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:06.763 [2024-07-26 12:14:54.660504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.660567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.660582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:06.763 [2024-07-26 12:14:54.660593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:06.763 [2024-07-26 12:14:54.660606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.660628] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:06.763 [2024-07-26 12:14:54.661851] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:06.763 [2024-07-26 12:14:54.661880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.661897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:06.763 [2024-07-26 12:14:54.661909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:21:06.763 [2024-07-26 12:14:54.661921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.661997] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:21:06.763 [2024-07-26 
12:14:54.663397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.663431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:06.763 [2024-07-26 12:14:54.663446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:06.763 [2024-07-26 12:14:54.663456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.670882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.670920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:06.763 [2024-07-26 12:14:54.670936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.377 ms 00:21:06.763 [2024-07-26 12:14:54.670947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.671051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.671065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:06.763 [2024-07-26 12:14:54.671078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:06.763 [2024-07-26 12:14:54.671088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.671178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.671193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:06.763 [2024-07-26 12:14:54.671209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:06.763 [2024-07-26 12:14:54.671220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.671249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:06.763 [2024-07-26 12:14:54.677219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.677261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:06.763 [2024-07-26 12:14:54.677273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.990 ms 00:21:06.763 [2024-07-26 12:14:54.677285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.677326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.677340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:06.763 [2024-07-26 12:14:54.677350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:06.763 [2024-07-26 12:14:54.677362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.677410] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:06.763 [2024-07-26 12:14:54.677545] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:06.763 [2024-07-26 12:14:54.677559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:06.763 [2024-07-26 12:14:54.677578] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:06.763 [2024-07-26 12:14:54.677592] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:21:06.763 [2024-07-26 12:14:54.677606] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:06.763 [2024-07-26 12:14:54.677617] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:06.763 [2024-07-26 12:14:54.677642] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:06.763 [2024-07-26 12:14:54.677651] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:06.763 [2024-07-26 12:14:54.677663] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:06.763 [2024-07-26 12:14:54.677673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.677686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:06.763 [2024-07-26 12:14:54.677697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:21:06.763 [2024-07-26 12:14:54.677708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.677779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.763 [2024-07-26 12:14:54.677792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:06.763 [2024-07-26 12:14:54.677802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:06.763 [2024-07-26 12:14:54.677817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.763 [2024-07-26 12:14:54.677901] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:06.763 [2024-07-26 12:14:54.677918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:06.763 [2024-07-26 12:14:54.677940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.763 [2024-07-26 12:14:54.677953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.763 [2024-07-26 12:14:54.677963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:06.763 [2024-07-26 12:14:54.677975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:06.763 [2024-07-26 12:14:54.677984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:06.763 [2024-07-26 12:14:54.677996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:06.763 [2024-07-26 12:14:54.678006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:06.763 [2024-07-26 12:14:54.678017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.763 [2024-07-26 12:14:54.678026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:06.763 [2024-07-26 12:14:54.678040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:06.763 [2024-07-26 12:14:54.678049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:06.764 [2024-07-26 12:14:54.678062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:06.764 [2024-07-26 12:14:54.678072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:06.764 [2024-07-26 12:14:54.678083] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:06.764 [2024-07-26 12:14:54.678106] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:21:06.764 [2024-07-26 12:14:54.678115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:06.764 [2024-07-26 12:14:54.678148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678168] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:06.764 [2024-07-26 12:14:54.678180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:06.764 [2024-07-26 12:14:54.678210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:06.764 [2024-07-26 12:14:54.678242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:06.764 [2024-07-26 12:14:54.678272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678286] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.764 [2024-07-26 12:14:54.678295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:06.764 [2024-07-26 12:14:54.678307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:06.764 [2024-07-26 12:14:54.678316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:06.764 [2024-07-26 12:14:54.678328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:06.764 [2024-07-26 12:14:54.678337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:06.764 [2024-07-26 12:14:54.678349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:06.764 [2024-07-26 12:14:54.678370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:06.764 [2024-07-26 12:14:54.678379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678390] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:06.764 [2024-07-26 12:14:54.678400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:06.764 [2024-07-26 12:14:54.678413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:06.764 [2024-07-26 12:14:54.678435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:06.764 [2024-07-26 12:14:54.678445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:06.764 [2024-07-26 12:14:54.678459] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:06.764 [2024-07-26 12:14:54.678469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:06.764 [2024-07-26 12:14:54.678481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:06.764 [2024-07-26 12:14:54.678490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:06.764 [2024-07-26 12:14:54.678505] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:06.764 [2024-07-26 12:14:54.678520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:06.764 [2024-07-26 12:14:54.678544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:06.764 [2024-07-26 12:14:54.678557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:06.764 [2024-07-26 12:14:54.678567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:06.764 [2024-07-26 12:14:54.678580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:06.764 [2024-07-26 12:14:54.678591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:06.764 [2024-07-26 12:14:54.678604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:06.764 [2024-07-26 12:14:54.678615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:06.764 [2024-07-26 12:14:54.678628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:06.764 [2024-07-26 12:14:54.678638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:06.764 [2024-07-26 12:14:54.678699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:06.764 [2024-07-26 12:14:54.678710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678723] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:06.764 [2024-07-26 12:14:54.678734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:06.764 [2024-07-26 12:14:54.678747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:06.764 [2024-07-26 12:14:54.678757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:06.764 [2024-07-26 12:14:54.678771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.764 [2024-07-26 12:14:54.678781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:06.764 [2024-07-26 12:14:54.678795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:21:06.764 [2024-07-26 12:14:54.678805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.764 [2024-07-26 12:14:54.678849] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:06.764 [2024-07-26 12:14:54.678861] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:10.044 [2024-07-26 12:14:57.920395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:57.920463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:10.044 [2024-07-26 12:14:57.920483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3246.799 ms 00:21:10.044 [2024-07-26 12:14:57.920494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:57.965313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:57.965370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:10.044 [2024-07-26 12:14:57.965389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.584 ms 00:21:10.044 [2024-07-26 12:14:57.965400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:57.965573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:57.965586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:10.044 [2024-07-26 12:14:57.965608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:10.044 [2024-07-26 12:14:57.965618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:58.015353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:58.015400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:10.044 [2024-07-26 12:14:58.015418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.757 ms 00:21:10.044 [2024-07-26 12:14:58.015428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:58.015489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:58.015499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:10.044 [2024-07-26 12:14:58.015518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:21:10.044 [2024-07-26 12:14:58.015528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:58.016008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:58.016025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:10.044 [2024-07-26 12:14:58.016039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:21:10.044 [2024-07-26 12:14:58.016049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.044 [2024-07-26 12:14:58.016179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.044 [2024-07-26 12:14:58.016195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:10.044 [2024-07-26 12:14:58.016209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:21:10.044 [2024-07-26 12:14:58.016218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.302 [2024-07-26 12:14:58.037249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.302 [2024-07-26 12:14:58.037296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:10.302 [2024-07-26 12:14:58.037313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.037 ms 00:21:10.302 [2024-07-26 12:14:58.037323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.302 [2024-07-26 12:14:58.050633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:10.302 [2024-07-26 12:14:58.053809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 12:14:58.053840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:10.303 [2024-07-26 12:14:58.053854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.379 ms 00:21:10.303 [2024-07-26 12:14:58.053867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.303 [2024-07-26 12:14:58.152441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 12:14:58.152503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:10.303 [2024-07-26 12:14:58.152520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.688 ms 00:21:10.303 [2024-07-26 12:14:58.152533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.303 [2024-07-26 12:14:58.152721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 12:14:58.152737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:10.303 [2024-07-26 12:14:58.152748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:21:10.303 [2024-07-26 12:14:58.152764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.303 [2024-07-26 12:14:58.189517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 12:14:58.189568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:10.303 [2024-07-26 12:14:58.189583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.760 ms 00:21:10.303 [2024-07-26 12:14:58.189599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.303 [2024-07-26 12:14:58.225941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 
12:14:58.225990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:10.303 [2024-07-26 12:14:58.226005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.349 ms 00:21:10.303 [2024-07-26 12:14:58.226018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.303 [2024-07-26 12:14:58.226818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.303 [2024-07-26 12:14:58.226842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:10.303 [2024-07-26 12:14:58.226857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:21:10.303 [2024-07-26 12:14:58.226870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.335835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.335899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:10.562 [2024-07-26 12:14:58.335916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.084 ms 00:21:10.562 [2024-07-26 12:14:58.335933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.378148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.378206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:10.562 [2024-07-26 12:14:58.378222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.236 ms 00:21:10.562 [2024-07-26 12:14:58.378235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.419767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.419826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:10.562 [2024-07-26 12:14:58.419841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.552 ms 00:21:10.562 [2024-07-26 12:14:58.419854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.460412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.460457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:10.562 [2024-07-26 12:14:58.460472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.576 ms 00:21:10.562 [2024-07-26 12:14:58.460484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.460530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.460544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:10.562 [2024-07-26 12:14:58.460555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:10.562 [2024-07-26 12:14:58.460570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.460679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:10.562 [2024-07-26 12:14:58.460697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:10.562 [2024-07-26 12:14:58.460709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:10.562 [2024-07-26 12:14:58.460721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:10.562 [2024-07-26 12:14:58.461855] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3807.035 ms, result 0 00:21:10.562 { 00:21:10.562 "name": "ftl0", 00:21:10.562 "uuid": "84b0b1fd-0692-4636-ae1e-94f73c17c0ad" 00:21:10.562 } 00:21:10.562 12:14:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:10.562 12:14:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:10.820 12:14:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:10.820 12:14:58 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:11.078 [2024-07-26 12:14:58.904470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.904533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.078 [2024-07-26 12:14:58.904552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.078 [2024-07-26 12:14:58.904563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.904593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.078 [2024-07-26 12:14:58.908503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.908538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.078 [2024-07-26 12:14:58.908551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.897 ms 00:21:11.078 [2024-07-26 12:14:58.908564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.908816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.908837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.078 [2024-07-26 12:14:58.908861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:21:11.078 [2024-07-26 12:14:58.908875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.911404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.911428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.078 [2024-07-26 12:14:58.911439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:21:11.078 [2024-07-26 12:14:58.911452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.916537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.916575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.078 [2024-07-26 12:14:58.916587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:21:11.078 [2024-07-26 12:14:58.916599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.955465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.955524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.078 [2024-07-26 12:14:58.955541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.836 ms 00:21:11.078 [2024-07-26 12:14:58.955554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 
12:14:58.978746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.978813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.078 [2024-07-26 12:14:58.978829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:21:11.078 [2024-07-26 12:14:58.978842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:58.979043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:58.979065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.078 [2024-07-26 12:14:58.979077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:21:11.078 [2024-07-26 12:14:58.979089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.078 [2024-07-26 12:14:59.018257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.078 [2024-07-26 12:14:59.018309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:11.078 [2024-07-26 12:14:59.018324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.207 ms 00:21:11.078 [2024-07-26 12:14:59.018337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.337 [2024-07-26 12:14:59.057122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.337 [2024-07-26 12:14:59.057175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:11.337 [2024-07-26 12:14:59.057190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.792 ms 00:21:11.337 [2024-07-26 12:14:59.057204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.337 [2024-07-26 12:14:59.095025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.337 [2024-07-26 12:14:59.095084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.337 [2024-07-26 12:14:59.095099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.832 ms 00:21:11.337 [2024-07-26 12:14:59.095111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.337 [2024-07-26 12:14:59.133060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.337 [2024-07-26 12:14:59.133125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.337 [2024-07-26 12:14:59.133140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.890 ms 00:21:11.337 [2024-07-26 12:14:59.133153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.337 [2024-07-26 12:14:59.133201] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.337 [2024-07-26 12:14:59.133222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 
12:14:59.133290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:21:11.337 [2024-07-26 12:14:59.133600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.337 [2024-07-26 12:14:59.133667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.133998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.338 [2024-07-26 12:14:59.134485] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.338 [2024-07-26 12:14:59.134495] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:21:11.338 [2024-07-26 12:14:59.134508] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.338 [2024-07-26 12:14:59.134518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.338 [2024-07-26 12:14:59.134532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.338 [2024-07-26 12:14:59.134542] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.338 [2024-07-26 12:14:59.134554] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.338 [2024-07-26 12:14:59.134565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.338 [2024-07-26 12:14:59.134577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.338 [2024-07-26 12:14:59.134586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.338 [2024-07-26 12:14:59.134598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.338 [2024-07-26 12:14:59.134607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.338 [2024-07-26 12:14:59.134619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.338 [2024-07-26 12:14:59.134631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.410 ms 00:21:11.338 [2024-07-26 12:14:59.134645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.338 [2024-07-26 12:14:59.155067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.338 [2024-07-26 12:14:59.155117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.338 [2024-07-26 12:14:59.155146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.395 ms 00:21:11.338 [2024-07-26 12:14:59.155159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.338 [2024-07-26 12:14:59.155635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.338 [2024-07-26 12:14:59.155654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.338 [2024-07-26 12:14:59.155672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:21:11.338 [2024-07-26 12:14:59.155683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.338 [2024-07-26 12:14:59.217880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.338 [2024-07-26 12:14:59.217943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.338 [2024-07-26 12:14:59.217958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.338 [2024-07-26 12:14:59.217970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.338 [2024-07-26 12:14:59.218048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.339 [2024-07-26 12:14:59.218063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.339 [2024-07-26 12:14:59.218076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.339 [2024-07-26 12:14:59.218089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.339 [2024-07-26 12:14:59.218217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.339 [2024-07-26 12:14:59.218235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.339 [2024-07-26 12:14:59.218246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.339 [2024-07-26 12:14:59.218258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.339 [2024-07-26 12:14:59.218279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.339 [2024-07-26 12:14:59.218295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:21:11.339 [2024-07-26 12:14:59.218305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.339 [2024-07-26 12:14:59.218320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.597 [2024-07-26 12:14:59.337330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.597 [2024-07-26 12:14:59.337395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.597 [2024-07-26 12:14:59.337410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.597 [2024-07-26 12:14:59.337423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.597 [2024-07-26 12:14:59.440272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.597 [2024-07-26 12:14:59.440342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.597 [2024-07-26 12:14:59.440360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.597 [2024-07-26 12:14:59.440374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.597 [2024-07-26 12:14:59.440497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.597 [2024-07-26 12:14:59.440512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.597 [2024-07-26 12:14:59.440523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.597 [2024-07-26 12:14:59.440535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.597 [2024-07-26 12:14:59.440591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.598 [2024-07-26 12:14:59.440609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.598 [2024-07-26 12:14:59.440619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.598 [2024-07-26 12:14:59.440632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.598 [2024-07-26 12:14:59.440748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.598 [2024-07-26 12:14:59.440764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.598 [2024-07-26 12:14:59.440775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.598 [2024-07-26 12:14:59.440788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.598 [2024-07-26 12:14:59.440824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.598 [2024-07-26 12:14:59.440840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:11.598 [2024-07-26 12:14:59.440850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.598 [2024-07-26 12:14:59.440862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.598 [2024-07-26 12:14:59.440904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.598 [2024-07-26 12:14:59.440917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.598 [2024-07-26 12:14:59.440927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.598 [2024-07-26 12:14:59.440939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.598 [2024-07-26 12:14:59.440985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.598 [2024-07-26 12:14:59.441002] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.598 [2024-07-26 12:14:59.441013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.598 [2024-07-26 12:14:59.441026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.598 [2024-07-26 12:14:59.441179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.525 ms, result 0 00:21:11.598 true 00:21:11.598 12:14:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80233 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80233 ']' 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80233 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80233 00:21:11.598 killing process with pid 80233 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80233' 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80233 00:21:11.598 12:14:59 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80233 00:21:16.922 12:15:04 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:21.184 262144+0 records in 00:21:21.184 262144+0 records out 00:21:21.184 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.0076 s, 268 MB/s 00:21:21.184 12:15:08 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:22.562 12:15:10 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:22.562 [2024-07-26 12:15:10.437379] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:21:22.562 [2024-07-26 12:15:10.437491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80470 ] 00:21:22.831 [2024-07-26 12:15:10.596928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.091 [2024-07-26 12:15:10.856695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.349 [2024-07-26 12:15:11.252567] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.350 [2024-07-26 12:15:11.252633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.682 [2024-07-26 12:15:11.414822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.414879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.682 [2024-07-26 12:15:11.414894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.682 [2024-07-26 12:15:11.414905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.414955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.414968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.682 [2024-07-26 12:15:11.414978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:23.682 [2024-07-26 12:15:11.414991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.415016] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.682 [2024-07-26 12:15:11.416103] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.682 [2024-07-26 12:15:11.416137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.416148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.682 [2024-07-26 12:15:11.416159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:21:23.682 [2024-07-26 12:15:11.416168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.417579] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.682 [2024-07-26 12:15:11.437993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.438030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.682 [2024-07-26 12:15:11.438044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.447 ms 00:21:23.682 [2024-07-26 12:15:11.438055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.438136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.438152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.682 [2024-07-26 12:15:11.438163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:23.682 [2024-07-26 12:15:11.438172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.445028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:23.682 [2024-07-26 12:15:11.445056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.682 [2024-07-26 12:15:11.445068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:21:23.682 [2024-07-26 12:15:11.445077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.445175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.445189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.682 [2024-07-26 12:15:11.445199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:23.682 [2024-07-26 12:15:11.445209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.445253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.445265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.682 [2024-07-26 12:15:11.445275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.682 [2024-07-26 12:15:11.445285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.445309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.682 [2024-07-26 12:15:11.450740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.450788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.682 [2024-07-26 12:15:11.450800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.446 ms 00:21:23.682 [2024-07-26 12:15:11.450810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.450846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.450856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.682 [2024-07-26 12:15:11.450867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:23.682 [2024-07-26 12:15:11.450876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.450930] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.682 [2024-07-26 12:15:11.450954] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.682 [2024-07-26 12:15:11.450988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.682 [2024-07-26 12:15:11.451007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:23.682 [2024-07-26 12:15:11.451089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.682 [2024-07-26 12:15:11.451101] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.682 [2024-07-26 12:15:11.451114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:23.682 [2024-07-26 12:15:11.451127] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.682 [2024-07-26 12:15:11.451153] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.682 [2024-07-26 12:15:11.451164] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.682 [2024-07-26 12:15:11.451174] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.682 [2024-07-26 12:15:11.451183] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.682 [2024-07-26 12:15:11.451193] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.682 [2024-07-26 12:15:11.451203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.451216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.682 [2024-07-26 12:15:11.451226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:21:23.682 [2024-07-26 12:15:11.451235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.451306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.682 [2024-07-26 12:15:11.451316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.682 [2024-07-26 12:15:11.451327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:23.682 [2024-07-26 12:15:11.451336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.682 [2024-07-26 12:15:11.451417] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.682 [2024-07-26 12:15:11.451429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.682 [2024-07-26 12:15:11.451442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.682 [2024-07-26 12:15:11.451452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.682 [2024-07-26 12:15:11.451462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.682 [2024-07-26 12:15:11.451472] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.682 [2024-07-26 12:15:11.451481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.682 [2024-07-26 12:15:11.451491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.682 [2024-07-26 12:15:11.451501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.682 [2024-07-26 12:15:11.451510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.682 [2024-07-26 12:15:11.451519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.682 [2024-07-26 12:15:11.451529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.682 [2024-07-26 12:15:11.451538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.682 [2024-07-26 12:15:11.451547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.683 [2024-07-26 12:15:11.451556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.683 [2024-07-26 12:15:11.451565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.683 [2024-07-26 12:15:11.451583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451592] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.683 [2024-07-26 12:15:11.451621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.683 [2024-07-26 12:15:11.451648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.683 [2024-07-26 12:15:11.451675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.683 [2024-07-26 12:15:11.451702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451711] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.683 [2024-07-26 12:15:11.451728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.683 [2024-07-26 12:15:11.451746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.683 [2024-07-26 12:15:11.451755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.683 [2024-07-26 12:15:11.451764] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.683 [2024-07-26 12:15:11.451773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.683 [2024-07-26 12:15:11.451782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.683 [2024-07-26 12:15:11.451791] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.683 [2024-07-26 12:15:11.451808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.683 [2024-07-26 12:15:11.451819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451828] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.683 [2024-07-26 12:15:11.451838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.683 [2024-07-26 12:15:11.451847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.683 [2024-07-26 12:15:11.451866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.683 [2024-07-26 12:15:11.451875] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.683 [2024-07-26 12:15:11.451884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.683 
[2024-07-26 12:15:11.451893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.683 [2024-07-26 12:15:11.451902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.683 [2024-07-26 12:15:11.451911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.683 [2024-07-26 12:15:11.451921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.683 [2024-07-26 12:15:11.451933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.451944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.683 [2024-07-26 12:15:11.451955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.683 [2024-07-26 12:15:11.451965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.683 [2024-07-26 12:15:11.451975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.683 [2024-07-26 12:15:11.451986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.683 [2024-07-26 12:15:11.451996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.683 [2024-07-26 12:15:11.452006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.683 [2024-07-26 12:15:11.452016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.683 [2024-07-26 12:15:11.452026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.683 [2024-07-26 12:15:11.452036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.683 [2024-07-26 12:15:11.452087] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.683 [2024-07-26 12:15:11.452098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.683 [2024-07-26 12:15:11.452133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.683 [2024-07-26 12:15:11.452143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.683 [2024-07-26 12:15:11.452154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.683 [2024-07-26 12:15:11.452165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.452176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.683 [2024-07-26 12:15:11.452186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:21:23.683 [2024-07-26 12:15:11.452195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.513511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.513561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.683 [2024-07-26 12:15:11.513577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.367 ms 00:21:23.683 [2024-07-26 12:15:11.513588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.513695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.513706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.683 [2024-07-26 12:15:11.513717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:23.683 [2024-07-26 12:15:11.513727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.563868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.563910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.683 [2024-07-26 12:15:11.563925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.133 ms 00:21:23.683 [2024-07-26 12:15:11.563934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.563987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.563998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.683 [2024-07-26 12:15:11.564009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.683 [2024-07-26 12:15:11.564022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.564503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.564522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.683 [2024-07-26 12:15:11.564533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:21:23.683 [2024-07-26 12:15:11.564542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.564661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.564678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.683 [2024-07-26 12:15:11.564688] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:23.683 [2024-07-26 12:15:11.564698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.585637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.585676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.683 [2024-07-26 12:15:11.585690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.947 ms 00:21:23.683 [2024-07-26 12:15:11.585704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.683 [2024-07-26 12:15:11.606542] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:23.683 [2024-07-26 12:15:11.606593] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:23.683 [2024-07-26 12:15:11.606609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.683 [2024-07-26 12:15:11.606620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:23.683 [2024-07-26 12:15:11.606633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.809 ms 00:21:23.683 [2024-07-26 12:15:11.606643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.943 [2024-07-26 12:15:11.638211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.943 [2024-07-26 12:15:11.638266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:23.943 [2024-07-26 12:15:11.638291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.562 ms 00:21:23.943 [2024-07-26 12:15:11.638302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.943 [2024-07-26 12:15:11.659284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.943 [2024-07-26 12:15:11.659333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:23.943 [2024-07-26 12:15:11.659347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.951 ms 00:21:23.943 [2024-07-26 12:15:11.659357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.943 [2024-07-26 12:15:11.679209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.943 [2024-07-26 12:15:11.679249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:23.943 [2024-07-26 12:15:11.679263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.822 ms 00:21:23.943 [2024-07-26 12:15:11.679273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.943 [2024-07-26 12:15:11.680136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.943 [2024-07-26 12:15:11.680161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:23.943 [2024-07-26 12:15:11.680174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:21:23.943 [2024-07-26 12:15:11.680183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.943 [2024-07-26 12:15:11.766830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.943 [2024-07-26 12:15:11.766897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:23.943 [2024-07-26 12:15:11.766913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.765 ms 00:21:23.944 [2024-07-26 12:15:11.766924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.781180] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:23.944 [2024-07-26 12:15:11.784502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.784534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:23.944 [2024-07-26 12:15:11.784549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.517 ms 00:21:23.944 [2024-07-26 12:15:11.784559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.784674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.784687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:23.944 [2024-07-26 12:15:11.784699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:23.944 [2024-07-26 12:15:11.784709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.784784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.784799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:23.944 [2024-07-26 12:15:11.784810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:23.944 [2024-07-26 12:15:11.784819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.784839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.784849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:23.944 [2024-07-26 12:15:11.784860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.944 [2024-07-26 12:15:11.784870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.784901] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:23.944 [2024-07-26 12:15:11.784913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.784923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:23.944 [2024-07-26 12:15:11.784936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:23.944 [2024-07-26 12:15:11.784945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.826002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.826054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:23.944 [2024-07-26 12:15:11.826070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.103 ms 00:21:23.944 [2024-07-26 12:15:11.826080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.944 [2024-07-26 12:15:11.826193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.944 [2024-07-26 12:15:11.826212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:23.944 [2024-07-26 12:15:11.826224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:23.944 [2024-07-26 12:15:11.826234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:23.944 [2024-07-26 12:15:11.827412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.796 ms, result 0 00:21:57.394  Copying: 29/1024 [MB] (29 MBps) Copying: 59/1024 [MB] (29 MBps) Copying: 87/1024 [MB] (28 MBps) Copying: 115/1024 [MB] (27 MBps) Copying: 144/1024 [MB] (28 MBps) Copying: 173/1024 [MB] (29 MBps) Copying: 204/1024 [MB] (30 MBps) Copying: 236/1024 [MB] (32 MBps) Copying: 266/1024 [MB] (30 MBps) Copying: 297/1024 [MB] (30 MBps) Copying: 330/1024 [MB] (33 MBps) Copying: 367/1024 [MB] (36 MBps) Copying: 404/1024 [MB] (37 MBps) Copying: 438/1024 [MB] (34 MBps) Copying: 468/1024 [MB] (29 MBps) Copying: 499/1024 [MB] (31 MBps) Copying: 530/1024 [MB] (30 MBps) Copying: 562/1024 [MB] (31 MBps) Copying: 595/1024 [MB] (32 MBps) Copying: 625/1024 [MB] (30 MBps) Copying: 657/1024 [MB] (31 MBps) Copying: 687/1024 [MB] (30 MBps) Copying: 716/1024 [MB] (28 MBps) Copying: 744/1024 [MB] (27 MBps) Copying: 774/1024 [MB] (29 MBps) Copying: 804/1024 [MB] (30 MBps) Copying: 834/1024 [MB] (30 MBps) Copying: 864/1024 [MB] (29 MBps) Copying: 893/1024 [MB] (29 MBps) Copying: 923/1024 [MB] (29 MBps) Copying: 951/1024 [MB] (28 MBps) Copying: 980/1024 [MB] (28 MBps) Copying: 1008/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-26 12:15:45.314558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.394 [2024-07-26 12:15:45.314617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:57.394 [2024-07-26 12:15:45.314635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:57.394 [2024-07-26 12:15:45.314646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.394 [2024-07-26 12:15:45.314669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:57.394 [2024-07-26 12:15:45.318904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.394 [2024-07-26 12:15:45.318945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:57.394 [2024-07-26 12:15:45.318958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.224 ms 00:21:57.394 [2024-07-26 12:15:45.318968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.394 [2024-07-26 12:15:45.320406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.394 [2024-07-26 12:15:45.320457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:57.394 [2024-07-26 12:15:45.320470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:21:57.394 [2024-07-26 12:15:45.320481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.394 [2024-07-26 12:15:45.338385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.394 [2024-07-26 12:15:45.338428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:57.394 [2024-07-26 12:15:45.338442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.916 ms 00:21:57.394 [2024-07-26 12:15:45.338453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.394 [2024-07-26 12:15:45.343535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.394 [2024-07-26 12:15:45.343579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:57.394 [2024-07-26 12:15:45.343591] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.055 ms 00:21:57.394 [2024-07-26 12:15:45.343601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.383473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.383525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:57.655 [2024-07-26 12:15:45.383540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.878 ms 00:21:57.655 [2024-07-26 12:15:45.383551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.406176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.406218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:57.655 [2024-07-26 12:15:45.406233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.619 ms 00:21:57.655 [2024-07-26 12:15:45.406243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.406379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.406393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:57.655 [2024-07-26 12:15:45.406404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:57.655 [2024-07-26 12:15:45.406418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.443277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.443328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:57.655 [2024-07-26 12:15:45.443344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.901 ms 00:21:57.655 [2024-07-26 12:15:45.443354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.479566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.479639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:57.655 [2024-07-26 12:15:45.479655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.223 ms 00:21:57.655 [2024-07-26 12:15:45.479665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.515657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.515710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:57.655 [2024-07-26 12:15:45.515725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.006 ms 00:21:57.655 [2024-07-26 12:15:45.515749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.551373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.655 [2024-07-26 12:15:45.551420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:57.655 [2024-07-26 12:15:45.551434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.601 ms 00:21:57.655 [2024-07-26 12:15:45.551445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.655 [2024-07-26 12:15:45.551483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:57.655 [2024-07-26 12:15:45.551500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 
/ 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:57.655 [2024-07-26 12:15:45.551943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.551953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.551964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.551974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.551985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.551996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552037] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 
12:15:45.552310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:57.656 [2024-07-26 12:15:45.552557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:21:57.656 [2024-07-26 12:15:45.552576] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:57.656 [2024-07-26 12:15:45.552586] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:21:57.656 [2024-07-26 12:15:45.552597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:57.656 [2024-07-26 12:15:45.552611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:57.656 [2024-07-26 12:15:45.552620] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:57.656 [2024-07-26 12:15:45.552630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:57.656 [2024-07-26 12:15:45.552639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:57.656 [2024-07-26 12:15:45.552649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:57.656 [2024-07-26 12:15:45.552659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:57.656 [2024-07-26 12:15:45.552668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:57.656 [2024-07-26 12:15:45.552677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:57.656 [2024-07-26 12:15:45.552686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.656 [2024-07-26 12:15:45.552696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:57.656 [2024-07-26 12:15:45.552706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.206 ms 00:21:57.656 [2024-07-26 12:15:45.552719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 12:15:45.571402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.656 [2024-07-26 12:15:45.571445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:57.656 [2024-07-26 12:15:45.571458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.675 ms 00:21:57.656 [2024-07-26 12:15:45.571480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 12:15:45.571941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.656 [2024-07-26 12:15:45.571961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:57.656 [2024-07-26 12:15:45.571972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:21:57.656 [2024-07-26 12:15:45.571982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 12:15:45.613592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.656 [2024-07-26 12:15:45.613645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:57.656 [2024-07-26 12:15:45.613658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.656 [2024-07-26 12:15:45.613669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 12:15:45.613724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.656 [2024-07-26 12:15:45.613735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:57.656 [2024-07-26 12:15:45.613745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.656 [2024-07-26 12:15:45.613754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 
12:15:45.613820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.656 [2024-07-26 12:15:45.613833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:57.656 [2024-07-26 12:15:45.613843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.656 [2024-07-26 12:15:45.613853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.656 [2024-07-26 12:15:45.613869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.657 [2024-07-26 12:15:45.613880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:57.657 [2024-07-26 12:15:45.613889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.657 [2024-07-26 12:15:45.613899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.727347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.727407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:57.916 [2024-07-26 12:15:45.727422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.727432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.916 [2024-07-26 12:15:45.822499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.822509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.916 [2024-07-26 12:15:45.822632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.822641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.916 [2024-07-26 12:15:45.822700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.822709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.916 [2024-07-26 12:15:45.822854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.822864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.916 [2024-07-26 12:15:45.822920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.822929] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.822966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.822977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.916 [2024-07-26 12:15:45.822990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.823000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.823042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.916 [2024-07-26 12:15:45.823053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.916 [2024-07-26 12:15:45.823063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.916 [2024-07-26 12:15:45.823073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.916 [2024-07-26 12:15:45.823207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 509.440 ms, result 0 00:21:59.817 00:21:59.817 00:21:59.817 12:15:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:59.817 [2024-07-26 12:15:47.692416] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:21:59.817 [2024-07-26 12:15:47.692557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80857 ] 00:22:00.141 [2024-07-26 12:15:47.875085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.142 [2024-07-26 12:15:48.114253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.708 [2024-07-26 12:15:48.520460] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:00.708 [2024-07-26 12:15:48.520528] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:00.708 [2024-07-26 12:15:48.681482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.708 [2024-07-26 12:15:48.681537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:00.708 [2024-07-26 12:15:48.681553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.708 [2024-07-26 12:15:48.681563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.708 [2024-07-26 12:15:48.681626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.708 [2024-07-26 12:15:48.681638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.708 [2024-07-26 12:15:48.681649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:00.708 [2024-07-26 12:15:48.681680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.708 [2024-07-26 12:15:48.681707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:00.708 [2024-07-26 12:15:48.682888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:00.708 [2024-07-26 12:15:48.682920] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.708 [2024-07-26 12:15:48.682931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.708 [2024-07-26 12:15:48.682942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:22:00.708 [2024-07-26 12:15:48.682951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.708 [2024-07-26 12:15:48.684443] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:00.968 [2024-07-26 12:15:48.705533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.705578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:00.968 [2024-07-26 12:15:48.705594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.124 ms 00:22:00.968 [2024-07-26 12:15:48.705605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.705710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.705727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:00.968 [2024-07-26 12:15:48.705739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:00.968 [2024-07-26 12:15:48.705749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.712927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.712956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.968 [2024-07-26 12:15:48.712968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.112 ms 00:22:00.968 [2024-07-26 12:15:48.712979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.713062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.713075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.968 [2024-07-26 12:15:48.713085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:00.968 [2024-07-26 12:15:48.713095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.713150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.713163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:00.968 [2024-07-26 12:15:48.713174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:00.968 [2024-07-26 12:15:48.713183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.713209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:00.968 [2024-07-26 12:15:48.719035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.719068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.968 [2024-07-26 12:15:48.719080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.841 ms 00:22:00.968 [2024-07-26 12:15:48.719090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.719134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.719146] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:00.968 [2024-07-26 12:15:48.719156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:00.968 [2024-07-26 12:15:48.719165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.719220] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:00.968 [2024-07-26 12:15:48.719245] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:00.968 [2024-07-26 12:15:48.719280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:00.968 [2024-07-26 12:15:48.719300] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:00.968 [2024-07-26 12:15:48.719382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:00.968 [2024-07-26 12:15:48.719395] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:00.968 [2024-07-26 12:15:48.719408] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:00.968 [2024-07-26 12:15:48.719420] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:00.968 [2024-07-26 12:15:48.719447] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:00.968 [2024-07-26 12:15:48.719459] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:00.968 [2024-07-26 12:15:48.719469] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:00.968 [2024-07-26 12:15:48.719479] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:00.968 [2024-07-26 12:15:48.719489] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:00.968 [2024-07-26 12:15:48.719500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.719514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:00.968 [2024-07-26 12:15:48.719524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:22:00.968 [2024-07-26 12:15:48.719535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.719609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.968 [2024-07-26 12:15:48.719620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:00.968 [2024-07-26 12:15:48.719630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:00.968 [2024-07-26 12:15:48.719640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.968 [2024-07-26 12:15:48.719728] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:00.968 [2024-07-26 12:15:48.719740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:00.968 [2024-07-26 12:15:48.719754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:00.968 [2024-07-26 12:15:48.719765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.968 [2024-07-26 12:15:48.719777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:00.968 
[2024-07-26 12:15:48.719787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:00.968 [2024-07-26 12:15:48.719797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:00.968 [2024-07-26 12:15:48.719806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:00.968 [2024-07-26 12:15:48.719816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:00.968 [2024-07-26 12:15:48.719826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:00.969 [2024-07-26 12:15:48.719835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:00.969 [2024-07-26 12:15:48.719845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:00.969 [2024-07-26 12:15:48.719854] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:00.969 [2024-07-26 12:15:48.719864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:00.969 [2024-07-26 12:15:48.719874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:00.969 [2024-07-26 12:15:48.719883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.969 [2024-07-26 12:15:48.719893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:00.969 [2024-07-26 12:15:48.719902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:00.969 [2024-07-26 12:15:48.719912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.969 [2024-07-26 12:15:48.719921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:00.969 [2024-07-26 12:15:48.719942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:00.969 [2024-07-26 12:15:48.719952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.969 [2024-07-26 12:15:48.719961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:00.969 [2024-07-26 12:15:48.719970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:00.969 [2024-07-26 12:15:48.719980] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.969 [2024-07-26 12:15:48.719989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:00.969 [2024-07-26 12:15:48.719999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.969 [2024-07-26 12:15:48.720018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:00.969 [2024-07-26 12:15:48.720027] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:00.969 [2024-07-26 12:15:48.720047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:00.969 [2024-07-26 12:15:48.720056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:00.969 [2024-07-26 12:15:48.720075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:00.969 [2024-07-26 12:15:48.720084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:00.969 [2024-07-26 12:15:48.720094] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:22:00.969 [2024-07-26 12:15:48.720103] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:00.969 [2024-07-26 12:15:48.720113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:00.969 [2024-07-26 12:15:48.720122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:00.969 [2024-07-26 12:15:48.720153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:00.969 [2024-07-26 12:15:48.720162] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720172] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:00.969 [2024-07-26 12:15:48.720182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:00.969 [2024-07-26 12:15:48.720192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:00.969 [2024-07-26 12:15:48.720202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:00.969 [2024-07-26 12:15:48.720212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:00.969 [2024-07-26 12:15:48.720222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:00.969 [2024-07-26 12:15:48.720231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:00.969 [2024-07-26 12:15:48.720241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:00.969 [2024-07-26 12:15:48.720250] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:00.969 [2024-07-26 12:15:48.720260] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:00.969 [2024-07-26 12:15:48.720270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:00.969 [2024-07-26 12:15:48.720282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:00.969 [2024-07-26 12:15:48.720305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:00.969 [2024-07-26 12:15:48.720315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:00.969 [2024-07-26 12:15:48.720326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:00.969 [2024-07-26 12:15:48.720336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:00.969 [2024-07-26 12:15:48.720347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:00.969 [2024-07-26 12:15:48.720358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:00.969 [2024-07-26 12:15:48.720368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 
00:22:00.969 [2024-07-26 12:15:48.720378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:00.969 [2024-07-26 12:15:48.720389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:00.969 [2024-07-26 12:15:48.720442] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:00.969 [2024-07-26 12:15:48.720453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:00.969 [2024-07-26 12:15:48.720479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:00.969 [2024-07-26 12:15:48.720490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:00.969 [2024-07-26 12:15:48.720501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:00.969 [2024-07-26 12:15:48.720512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.720523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:00.969 [2024-07-26 12:15:48.720533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:22:00.969 [2024-07-26 12:15:48.720543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.775289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.775340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.969 [2024-07-26 12:15:48.775356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.784 ms 00:22:00.969 [2024-07-26 12:15:48.775367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.775464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.775475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.969 [2024-07-26 12:15:48.775486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:00.969 [2024-07-26 12:15:48.775496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.828385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:00.969 [2024-07-26 12:15:48.828432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.969 [2024-07-26 12:15:48.828447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.880 ms 00:22:00.969 [2024-07-26 12:15:48.828457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.828510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.828521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.969 [2024-07-26 12:15:48.828532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:00.969 [2024-07-26 12:15:48.828546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.829026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.829039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.969 [2024-07-26 12:15:48.829050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:22:00.969 [2024-07-26 12:15:48.829060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.829194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.829207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.969 [2024-07-26 12:15:48.829218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:00.969 [2024-07-26 12:15:48.829227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.850052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.969 [2024-07-26 12:15:48.850093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.969 [2024-07-26 12:15:48.850107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.833 ms 00:22:00.969 [2024-07-26 12:15:48.850135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.969 [2024-07-26 12:15:48.870374] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:00.970 [2024-07-26 12:15:48.870419] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:00.970 [2024-07-26 12:15:48.870436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.970 [2024-07-26 12:15:48.870447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:00.970 [2024-07-26 12:15:48.870460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.200 ms 00:22:00.970 [2024-07-26 12:15:48.870469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.970 [2024-07-26 12:15:48.902298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.970 [2024-07-26 12:15:48.902376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:00.970 [2024-07-26 12:15:48.902393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.827 ms 00:22:00.970 [2024-07-26 12:15:48.902404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.970 [2024-07-26 12:15:48.923512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.970 [2024-07-26 12:15:48.923565] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:00.970 [2024-07-26 12:15:48.923580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.077 ms 00:22:00.970 [2024-07-26 12:15:48.923590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.970 [2024-07-26 12:15:48.943698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.970 [2024-07-26 12:15:48.943743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:00.970 [2024-07-26 12:15:48.943757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.091 ms 00:22:00.970 [2024-07-26 12:15:48.943767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.970 [2024-07-26 12:15:48.944646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.970 [2024-07-26 12:15:48.944670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.970 [2024-07-26 12:15:48.944682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:22:00.970 [2024-07-26 12:15:48.944692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.039252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.039322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:01.228 [2024-07-26 12:15:49.039340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.687 ms 00:22:01.228 [2024-07-26 12:15:49.039357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.052949] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:01.228 [2024-07-26 12:15:49.056302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.056338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:01.228 [2024-07-26 12:15:49.056352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.899 ms 00:22:01.228 [2024-07-26 12:15:49.056362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.056488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.056501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:01.228 [2024-07-26 12:15:49.056513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:01.228 [2024-07-26 12:15:49.056523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.056601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.056613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:01.228 [2024-07-26 12:15:49.056623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:01.228 [2024-07-26 12:15:49.056633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.056652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.056663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:01.228 [2024-07-26 12:15:49.056673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:01.228 [2024-07-26 12:15:49.056683] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.056714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:01.228 [2024-07-26 12:15:49.056726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.056740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:01.228 [2024-07-26 12:15:49.056751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:01.228 [2024-07-26 12:15:49.056761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.094348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.094393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:01.228 [2024-07-26 12:15:49.094409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.628 ms 00:22:01.228 [2024-07-26 12:15:49.094426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.094505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.228 [2024-07-26 12:15:49.094518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:01.228 [2024-07-26 12:15:49.094529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:01.228 [2024-07-26 12:15:49.094539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.228 [2024-07-26 12:15:49.095702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.445 ms, result 0 00:22:35.056  Copying: 32/1024 [MB] (32 MBps) Copying: 65/1024 [MB] (32 MBps) Copying: 99/1024 [MB] (34 MBps) Copying: 131/1024 [MB] (31 MBps) Copying: 162/1024 [MB] (31 MBps) Copying: 194/1024 [MB] (31 MBps) Copying: 225/1024 [MB] (31 MBps) Copying: 259/1024 [MB] (34 MBps) Copying: 291/1024 [MB] (31 MBps) Copying: 322/1024 [MB] (30 MBps) Copying: 352/1024 [MB] (30 MBps) Copying: 383/1024 [MB] (30 MBps) Copying: 413/1024 [MB] (30 MBps) Copying: 444/1024 [MB] (30 MBps) Copying: 474/1024 [MB] (29 MBps) Copying: 504/1024 [MB] (29 MBps) Copying: 534/1024 [MB] (30 MBps) Copying: 564/1024 [MB] (29 MBps) Copying: 593/1024 [MB] (29 MBps) Copying: 624/1024 [MB] (30 MBps) Copying: 654/1024 [MB] (30 MBps) Copying: 685/1024 [MB] (31 MBps) Copying: 715/1024 [MB] (29 MBps) Copying: 745/1024 [MB] (29 MBps) Copying: 775/1024 [MB] (30 MBps) Copying: 805/1024 [MB] (29 MBps) Copying: 834/1024 [MB] (28 MBps) Copying: 863/1024 [MB] (29 MBps) Copying: 892/1024 [MB] (29 MBps) Copying: 922/1024 [MB] (29 MBps) Copying: 953/1024 [MB] (30 MBps) Copying: 983/1024 [MB] (30 MBps) Copying: 1013/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-26 12:16:22.974378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 [2024-07-26 12:16:22.974452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:35.056 [2024-07-26 12:16:22.974469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:35.056 [2024-07-26 12:16:22.974480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.056 [2024-07-26 12:16:22.974503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:35.056 [2024-07-26 12:16:22.978826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 
[2024-07-26 12:16:22.978873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:35.056 [2024-07-26 12:16:22.978904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.310 ms 00:22:35.056 [2024-07-26 12:16:22.978924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.056 [2024-07-26 12:16:22.979158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 [2024-07-26 12:16:22.979187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:35.056 [2024-07-26 12:16:22.979199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:35.056 [2024-07-26 12:16:22.979210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.056 [2024-07-26 12:16:22.982387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 [2024-07-26 12:16:22.982418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:35.056 [2024-07-26 12:16:22.982431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:22:35.056 [2024-07-26 12:16:22.982458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.056 [2024-07-26 12:16:22.988662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 [2024-07-26 12:16:22.988706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:35.056 [2024-07-26 12:16:22.988719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.184 ms 00:22:35.056 [2024-07-26 12:16:22.988729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.056 [2024-07-26 12:16:23.031731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.056 [2024-07-26 12:16:23.031780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:35.056 [2024-07-26 12:16:23.031796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.985 ms 00:22:35.056 [2024-07-26 12:16:23.031807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.053783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.053831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:35.317 [2024-07-26 12:16:23.053847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.964 ms 00:22:35.317 [2024-07-26 12:16:23.053857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.054002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.054017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:35.317 [2024-07-26 12:16:23.054032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:35.317 [2024-07-26 12:16:23.054042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.094893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.094951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:35.317 [2024-07-26 12:16:23.094966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.898 ms 00:22:35.317 [2024-07-26 12:16:23.094976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.134778] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.134831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:35.317 [2024-07-26 12:16:23.134846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.816 ms 00:22:35.317 [2024-07-26 12:16:23.134856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.173343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.173402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:35.317 [2024-07-26 12:16:23.173430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.507 ms 00:22:35.317 [2024-07-26 12:16:23.173440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.210077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.317 [2024-07-26 12:16:23.210133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:35.317 [2024-07-26 12:16:23.210148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.601 ms 00:22:35.317 [2024-07-26 12:16:23.210158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.317 [2024-07-26 12:16:23.210198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:35.317 [2024-07-26 12:16:23.210214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:22:35.317 [2024-07-26 12:16:23.210380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:35.317 [2024-07-26 12:16:23.210391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.210994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211171] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:35.318 [2024-07-26 12:16:23.211293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:35.318 [2024-07-26 12:16:23.211302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:22:35.318 [2024-07-26 12:16:23.211316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:35.319 [2024-07-26 12:16:23.211326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:35.319 [2024-07-26 12:16:23.211335] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:35.319 [2024-07-26 12:16:23.211345] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:35.319 [2024-07-26 12:16:23.211354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:35.319 [2024-07-26 12:16:23.211364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:35.319 [2024-07-26 12:16:23.211374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:35.319 [2024-07-26 12:16:23.211383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:35.319 [2024-07-26 12:16:23.211392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:35.319 [2024-07-26 12:16:23.211402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.319 [2024-07-26 12:16:23.211412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:35.319 [2024-07-26 12:16:23.211426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:22:35.319 [2024-07-26 12:16:23.211435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.231309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.319 [2024-07-26 12:16:23.231362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:35.319 [2024-07-26 12:16:23.231390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.867 ms 
00:22:35.319 [2024-07-26 12:16:23.231400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.231909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.319 [2024-07-26 12:16:23.231926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:35.319 [2024-07-26 12:16:23.231936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms 00:22:35.319 [2024-07-26 12:16:23.231946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.276277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.319 [2024-07-26 12:16:23.276326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.319 [2024-07-26 12:16:23.276340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.319 [2024-07-26 12:16:23.276350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.276414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.319 [2024-07-26 12:16:23.276425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.319 [2024-07-26 12:16:23.276435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.319 [2024-07-26 12:16:23.276444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.276522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.319 [2024-07-26 12:16:23.276535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.319 [2024-07-26 12:16:23.276545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.319 [2024-07-26 12:16:23.276555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.319 [2024-07-26 12:16:23.276571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.319 [2024-07-26 12:16:23.276591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:35.319 [2024-07-26 12:16:23.276601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.319 [2024-07-26 12:16:23.276610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.397622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.397674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.577 [2024-07-26 12:16:23.397689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.397700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:35.577 [2024-07-26 12:16:23.504280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.577 [2024-07-26 
12:16:23.504406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.577 [2024-07-26 12:16:23.504484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.577 [2024-07-26 12:16:23.504626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:35.577 [2024-07-26 12:16:23.504696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.577 [2024-07-26 12:16:23.504766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:35.577 [2024-07-26 12:16:23.504826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.577 [2024-07-26 12:16:23.504836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:35.577 [2024-07-26 12:16:23.504845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.577 [2024-07-26 12:16:23.504958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.412 ms, result 0 00:22:36.948 00:22:36.948 00:22:36.948 12:16:24 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:38.846 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:38.846 12:16:26 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:38.846 [2024-07-26 12:16:26.523530] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:22:38.846 [2024-07-26 12:16:26.523665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81251 ] 00:22:38.846 [2024-07-26 12:16:26.695516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.104 [2024-07-26 12:16:26.924113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.364 [2024-07-26 12:16:27.320189] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.364 [2024-07-26 12:16:27.320261] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:39.636 [2024-07-26 12:16:27.480607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.480658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:39.636 [2024-07-26 12:16:27.480674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:39.636 [2024-07-26 12:16:27.480685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.480755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.480771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.636 [2024-07-26 12:16:27.480782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:39.636 [2024-07-26 12:16:27.480795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.480821] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:39.636 [2024-07-26 12:16:27.481964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:39.636 [2024-07-26 12:16:27.482006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.482017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.636 [2024-07-26 12:16:27.482028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:22:39.636 [2024-07-26 12:16:27.482038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.483498] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:39.636 [2024-07-26 12:16:27.504702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.504753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:39.636 [2024-07-26 12:16:27.504769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.237 ms 00:22:39.636 [2024-07-26 12:16:27.504780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.504862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.504878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:39.636 [2024-07-26 12:16:27.504890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:39.636 [2024-07-26 12:16:27.504900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.512490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:39.636 [2024-07-26 12:16:27.512541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.636 [2024-07-26 12:16:27.512556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.511 ms 00:22:39.636 [2024-07-26 12:16:27.512566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.512670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.512686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.636 [2024-07-26 12:16:27.512697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:39.636 [2024-07-26 12:16:27.512706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.512770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.512782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:39.636 [2024-07-26 12:16:27.512793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:39.636 [2024-07-26 12:16:27.512803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.512830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:39.636 [2024-07-26 12:16:27.518429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.518479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:39.636 [2024-07-26 12:16:27.518494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.615 ms 00:22:39.636 [2024-07-26 12:16:27.518504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.518552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.636 [2024-07-26 12:16:27.518564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:39.636 [2024-07-26 12:16:27.518574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:39.636 [2024-07-26 12:16:27.518584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.636 [2024-07-26 12:16:27.518657] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:39.636 [2024-07-26 12:16:27.518682] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:39.636 [2024-07-26 12:16:27.518718] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:39.636 [2024-07-26 12:16:27.518738] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:39.637 [2024-07-26 12:16:27.518822] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:39.637 [2024-07-26 12:16:27.518836] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:39.637 [2024-07-26 12:16:27.518849] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:39.637 [2024-07-26 12:16:27.518862] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:39.637 [2024-07-26 12:16:27.518874] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:39.637 [2024-07-26 12:16:27.518885] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:39.637 [2024-07-26 12:16:27.518895] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:39.637 [2024-07-26 12:16:27.518906] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:39.637 [2024-07-26 12:16:27.518915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:39.637 [2024-07-26 12:16:27.518926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.637 [2024-07-26 12:16:27.518939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:39.637 [2024-07-26 12:16:27.518950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:22:39.637 [2024-07-26 12:16:27.518959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.637 [2024-07-26 12:16:27.519030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.637 [2024-07-26 12:16:27.519041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:39.637 [2024-07-26 12:16:27.519051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:39.637 [2024-07-26 12:16:27.519061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.637 [2024-07-26 12:16:27.519160] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:39.637 [2024-07-26 12:16:27.519176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:39.637 [2024-07-26 12:16:27.519198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519216] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:39.637 [2024-07-26 12:16:27.519236] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519246] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:39.637 [2024-07-26 12:16:27.519265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.637 [2024-07-26 12:16:27.519284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:39.637 [2024-07-26 12:16:27.519294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:39.637 [2024-07-26 12:16:27.519303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:39.637 [2024-07-26 12:16:27.519312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:39.637 [2024-07-26 12:16:27.519321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:39.637 [2024-07-26 12:16:27.519330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:39.637 [2024-07-26 12:16:27.519348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519357] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:39.637 [2024-07-26 12:16:27.519389] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:39.637 [2024-07-26 12:16:27.519417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:39.637 [2024-07-26 12:16:27.519445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:39.637 [2024-07-26 12:16:27.519473] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519482] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:39.637 [2024-07-26 12:16:27.519500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.637 [2024-07-26 12:16:27.519518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:39.637 [2024-07-26 12:16:27.519527] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:39.637 [2024-07-26 12:16:27.519536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:39.637 [2024-07-26 12:16:27.519545] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:39.637 [2024-07-26 12:16:27.519554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:39.637 [2024-07-26 12:16:27.519563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:39.637 [2024-07-26 12:16:27.519581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:39.637 [2024-07-26 12:16:27.519591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519600] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:39.637 [2024-07-26 12:16:27.519610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:39.637 [2024-07-26 12:16:27.519619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:39.637 [2024-07-26 12:16:27.519638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:39.637 [2024-07-26 12:16:27.519647] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:39.637 [2024-07-26 12:16:27.519656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:39.637 
[2024-07-26 12:16:27.519665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:39.637 [2024-07-26 12:16:27.519674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:39.637 [2024-07-26 12:16:27.519683] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:39.637 [2024-07-26 12:16:27.519694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:39.637 [2024-07-26 12:16:27.519706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:39.637 [2024-07-26 12:16:27.519731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:39.637 [2024-07-26 12:16:27.519742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:39.637 [2024-07-26 12:16:27.519752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:39.637 [2024-07-26 12:16:27.519763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:39.637 [2024-07-26 12:16:27.519773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:39.637 [2024-07-26 12:16:27.519784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:39.637 [2024-07-26 12:16:27.519801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:39.637 [2024-07-26 12:16:27.519814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:39.637 [2024-07-26 12:16:27.519825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:39.637 [2024-07-26 12:16:27.519875] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:39.637 [2024-07-26 12:16:27.519886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:39.637 [2024-07-26 12:16:27.519912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:39.637 [2024-07-26 12:16:27.519923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:39.637 [2024-07-26 12:16:27.519933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:39.637 [2024-07-26 12:16:27.519944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.637 [2024-07-26 12:16:27.519954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:39.637 [2024-07-26 12:16:27.519964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:22:39.637 [2024-07-26 12:16:27.519974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.638 [2024-07-26 12:16:27.573733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.638 [2024-07-26 12:16:27.573785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.638 [2024-07-26 12:16:27.573800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.791 ms 00:22:39.638 [2024-07-26 12:16:27.573810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.638 [2024-07-26 12:16:27.573906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.638 [2024-07-26 12:16:27.573918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:39.638 [2024-07-26 12:16:27.573928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:39.638 [2024-07-26 12:16:27.573938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.621104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.621165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.896 [2024-07-26 12:16:27.621181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.149 ms 00:22:39.896 [2024-07-26 12:16:27.621192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.621250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.621261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.896 [2024-07-26 12:16:27.621272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:39.896 [2024-07-26 12:16:27.621286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.621775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.621794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.896 [2024-07-26 12:16:27.621805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:22:39.896 [2024-07-26 12:16:27.621815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.621936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.621955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.896 [2024-07-26 12:16:27.621966] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:39.896 [2024-07-26 12:16:27.621975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.643359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.643405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.896 [2024-07-26 12:16:27.643419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.393 ms 00:22:39.896 [2024-07-26 12:16:27.643434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.664078] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:39.896 [2024-07-26 12:16:27.664146] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:39.896 [2024-07-26 12:16:27.664164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.664176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:39.896 [2024-07-26 12:16:27.664189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.634 ms 00:22:39.896 [2024-07-26 12:16:27.664200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.695540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.695610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:39.896 [2024-07-26 12:16:27.695626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.333 ms 00:22:39.896 [2024-07-26 12:16:27.695637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.716868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.716916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:39.896 [2024-07-26 12:16:27.716932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.186 ms 00:22:39.896 [2024-07-26 12:16:27.716942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.737065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.737112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:39.896 [2024-07-26 12:16:27.737141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.108 ms 00:22:39.896 [2024-07-26 12:16:27.737152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.738047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.738081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.896 [2024-07-26 12:16:27.738094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:22:39.896 [2024-07-26 12:16:27.738104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.896 [2024-07-26 12:16:27.829397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.896 [2024-07-26 12:16:27.829451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.897 [2024-07-26 12:16:27.829468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.396 ms 00:22:39.897 [2024-07-26 12:16:27.829485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.897 [2024-07-26 12:16:27.843344] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.897 [2024-07-26 12:16:27.846594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.897 [2024-07-26 12:16:27.846631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.897 [2024-07-26 12:16:27.846645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.069 ms 00:22:39.897 [2024-07-26 12:16:27.846656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.897 [2024-07-26 12:16:27.846778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.897 [2024-07-26 12:16:27.846791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.897 [2024-07-26 12:16:27.846803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:39.897 [2024-07-26 12:16:27.846813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.897 [2024-07-26 12:16:27.846893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.897 [2024-07-26 12:16:27.846905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.897 [2024-07-26 12:16:27.846916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:39.897 [2024-07-26 12:16:27.846926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.897 [2024-07-26 12:16:27.846947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.897 [2024-07-26 12:16:27.846957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.897 [2024-07-26 12:16:27.846969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:39.897 [2024-07-26 12:16:27.846979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.897 [2024-07-26 12:16:27.847011] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.897 [2024-07-26 12:16:27.847022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.897 [2024-07-26 12:16:27.847036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.897 [2024-07-26 12:16:27.847046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:39.897 [2024-07-26 12:16:27.847056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.154 [2024-07-26 12:16:27.888973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.154 [2024-07-26 12:16:27.889036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:40.154 [2024-07-26 12:16:27.889052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.964 ms 00:22:40.154 [2024-07-26 12:16:27.889071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.154 [2024-07-26 12:16:27.889185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.154 [2024-07-26 12:16:27.889198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:40.154 [2024-07-26 12:16:27.889210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:40.154 [2024-07-26 12:16:27.889219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:40.154 [2024-07-26 12:16:27.890417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.978 ms, result 0 00:23:16.200  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (28 MBps) Copying: 85/1024 [MB] (29 MBps) Copying: 113/1024 [MB] (27 MBps) Copying: 143/1024 [MB] (29 MBps) Copying: 172/1024 [MB] (29 MBps) Copying: 203/1024 [MB] (31 MBps) Copying: 233/1024 [MB] (30 MBps) Copying: 264/1024 [MB] (31 MBps) Copying: 296/1024 [MB] (31 MBps) Copying: 327/1024 [MB] (31 MBps) Copying: 358/1024 [MB] (31 MBps) Copying: 389/1024 [MB] (30 MBps) Copying: 420/1024 [MB] (30 MBps) Copying: 449/1024 [MB] (29 MBps) Copying: 479/1024 [MB] (29 MBps) Copying: 508/1024 [MB] (29 MBps) Copying: 537/1024 [MB] (28 MBps) Copying: 563/1024 [MB] (26 MBps) Copying: 590/1024 [MB] (26 MBps) Copying: 616/1024 [MB] (26 MBps) Copying: 643/1024 [MB] (26 MBps) Copying: 669/1024 [MB] (26 MBps) Copying: 696/1024 [MB] (27 MBps) Copying: 724/1024 [MB] (27 MBps) Copying: 751/1024 [MB] (26 MBps) Copying: 778/1024 [MB] (26 MBps) Copying: 807/1024 [MB] (29 MBps) Copying: 837/1024 [MB] (29 MBps) Copying: 866/1024 [MB] (29 MBps) Copying: 894/1024 [MB] (28 MBps) Copying: 923/1024 [MB] (28 MBps) Copying: 951/1024 [MB] (28 MBps) Copying: 981/1024 [MB] (29 MBps) Copying: 1008/1024 [MB] (27 MBps) Copying: 1023/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-26 12:17:04.155904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.200 [2024-07-26 12:17:04.155974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.200 [2024-07-26 12:17:04.155990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:16.200 [2024-07-26 12:17:04.156001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.200 [2024-07-26 12:17:04.157479] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:16.200 [2024-07-26 12:17:04.163331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.200 [2024-07-26 12:17:04.163369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.200 [2024-07-26 12:17:04.163383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.829 ms 00:23:16.200 [2024-07-26 12:17:04.163394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.200 [2024-07-26 12:17:04.172738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.200 [2024-07-26 12:17:04.172781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.200 [2024-07-26 12:17:04.172794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.119 ms 00:23:16.200 [2024-07-26 12:17:04.172805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.196819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.196867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.459 [2024-07-26 12:17:04.196882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.026 ms 00:23:16.459 [2024-07-26 12:17:04.196893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.201980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.202037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Finish L2P trims 00:23:16.459 [2024-07-26 12:17:04.202050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.060 ms 00:23:16.459 [2024-07-26 12:17:04.202060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.239563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.239606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.459 [2024-07-26 12:17:04.239621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.510 ms 00:23:16.459 [2024-07-26 12:17:04.239631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.261018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.261065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.459 [2024-07-26 12:17:04.261080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.383 ms 00:23:16.459 [2024-07-26 12:17:04.261090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.374418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.374503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.459 [2024-07-26 12:17:04.374520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.450 ms 00:23:16.459 [2024-07-26 12:17:04.374530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.459 [2024-07-26 12:17:04.413181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.459 [2024-07-26 12:17:04.413243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:16.459 [2024-07-26 12:17:04.413259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.696 ms 00:23:16.459 [2024-07-26 12:17:04.413269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.719 [2024-07-26 12:17:04.450933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.720 [2024-07-26 12:17:04.450978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:16.720 [2024-07-26 12:17:04.450993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.685 ms 00:23:16.720 [2024-07-26 12:17:04.451003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.720 [2024-07-26 12:17:04.488351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.720 [2024-07-26 12:17:04.488395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.720 [2024-07-26 12:17:04.488423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.368 ms 00:23:16.720 [2024-07-26 12:17:04.488433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.720 [2024-07-26 12:17:04.526040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.720 [2024-07-26 12:17:04.526084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.720 [2024-07-26 12:17:04.526098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.569 ms 00:23:16.720 [2024-07-26 12:17:04.526108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.720 [2024-07-26 12:17:04.526159] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:23:16.720 [2024-07-26 12:17:04.526177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108544 / 261120 wr_cnt: 1 state: open 00:23:16.720 [2024-07-26 12:17:04.526190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526959] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.720 [2024-07-26 12:17:04.526969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.526980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.526990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527233] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.721 [2024-07-26 12:17:04.527251] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.721 [2024-07-26 12:17:04.527261] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:23:16.721 [2024-07-26 12:17:04.527272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108544 00:23:16.721 [2024-07-26 12:17:04.527281] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109504 00:23:16.721 [2024-07-26 12:17:04.527291] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108544 00:23:16.721 [2024-07-26 12:17:04.527308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:23:16.721 [2024-07-26 12:17:04.527318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.721 [2024-07-26 12:17:04.527328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.721 [2024-07-26 12:17:04.527341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.721 [2024-07-26 12:17:04.527350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.721 [2024-07-26 12:17:04.527359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.721 [2024-07-26 12:17:04.527370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.721 [2024-07-26 12:17:04.527380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.721 [2024-07-26 12:17:04.527390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:23:16.721 [2024-07-26 12:17:04.527400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.547780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.721 [2024-07-26 12:17:04.547819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.721 [2024-07-26 12:17:04.547844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.378 ms 00:23:16.721 [2024-07-26 12:17:04.547854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.548357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.721 [2024-07-26 12:17:04.548376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.721 [2024-07-26 12:17:04.548387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:23:16.721 [2024-07-26 12:17:04.548397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.591495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.721 [2024-07-26 12:17:04.591538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.721 [2024-07-26 12:17:04.591554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.721 [2024-07-26 12:17:04.591565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.591619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.721 [2024-07-26 12:17:04.591630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.721 [2024-07-26 12:17:04.591640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.721 [2024-07-26 
12:17:04.591651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.591733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.721 [2024-07-26 12:17:04.591746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.721 [2024-07-26 12:17:04.591756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.721 [2024-07-26 12:17:04.591770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.721 [2024-07-26 12:17:04.591788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.721 [2024-07-26 12:17:04.591798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.721 [2024-07-26 12:17:04.591808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.721 [2024-07-26 12:17:04.591818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.716301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.716351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.981 [2024-07-26 12:17:04.716366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.716381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.825508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.825561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.981 [2024-07-26 12:17:04.825576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.825586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.825676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.825688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.981 [2024-07-26 12:17:04.825698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.825708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.825758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.825770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.981 [2024-07-26 12:17:04.825780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.825790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.825895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.825908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.981 [2024-07-26 12:17:04.825918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.825928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.825961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.825977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.981 [2024-07-26 12:17:04.825987] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.825997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.826032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.826042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.981 [2024-07-26 12:17:04.826052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.826062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.826106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.981 [2024-07-26 12:17:04.826117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.981 [2024-07-26 12:17:04.826155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.981 [2024-07-26 12:17:04.826165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.981 [2024-07-26 12:17:04.826348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 672.218 ms, result 0 00:23:18.924 00:23:18.924 00:23:18.924 12:17:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:18.924 [2024-07-26 12:17:06.539579] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:23:18.924 [2024-07-26 12:17:06.539692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81655 ] 00:23:18.924 [2024-07-26 12:17:06.707246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.183 [2024-07-26 12:17:06.931941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.442 [2024-07-26 12:17:07.320218] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.442 [2024-07-26 12:17:07.320284] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.702 [2024-07-26 12:17:07.480854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.480910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:19.702 [2024-07-26 12:17:07.480927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:19.702 [2024-07-26 12:17:07.480937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.480983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.480995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.702 [2024-07-26 12:17:07.481006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:19.702 [2024-07-26 12:17:07.481019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.481044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:19.702 [2024-07-26 12:17:07.482171] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:19.702 [2024-07-26 12:17:07.482204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.482215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.702 [2024-07-26 12:17:07.482227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:23:19.702 [2024-07-26 12:17:07.482236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.483644] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:19.702 [2024-07-26 12:17:07.504132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.504174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:19.702 [2024-07-26 12:17:07.504190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.522 ms 00:23:19.702 [2024-07-26 12:17:07.504201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.504267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.504282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:19.702 [2024-07-26 12:17:07.504294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:19.702 [2024-07-26 12:17:07.504303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.511169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.511200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.702 [2024-07-26 12:17:07.511212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.808 ms 00:23:19.702 [2024-07-26 12:17:07.511222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.511322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.511338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.702 [2024-07-26 12:17:07.511349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:19.702 [2024-07-26 12:17:07.511360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.511408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.511420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:19.702 [2024-07-26 12:17:07.511430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:19.702 [2024-07-26 12:17:07.511440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.511465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.702 [2024-07-26 12:17:07.517027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.517058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.702 [2024-07-26 12:17:07.517071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:23:19.702 [2024-07-26 12:17:07.517081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 
12:17:07.517116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.517140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:19.702 [2024-07-26 12:17:07.517151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:19.702 [2024-07-26 12:17:07.517161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.517214] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:19.702 [2024-07-26 12:17:07.517239] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:19.702 [2024-07-26 12:17:07.517288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:19.702 [2024-07-26 12:17:07.517314] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:19.702 [2024-07-26 12:17:07.517398] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:19.702 [2024-07-26 12:17:07.517411] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:19.702 [2024-07-26 12:17:07.517424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:19.702 [2024-07-26 12:17:07.517438] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:19.702 [2024-07-26 12:17:07.517450] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:19.702 [2024-07-26 12:17:07.517461] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:19.702 [2024-07-26 12:17:07.517471] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:19.702 [2024-07-26 12:17:07.517481] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:19.702 [2024-07-26 12:17:07.517491] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:19.702 [2024-07-26 12:17:07.517502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.702 [2024-07-26 12:17:07.517515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:19.702 [2024-07-26 12:17:07.517525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:23:19.702 [2024-07-26 12:17:07.517535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.702 [2024-07-26 12:17:07.517608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.517627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:19.703 [2024-07-26 12:17:07.517637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:19.703 [2024-07-26 12:17:07.517647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.703 [2024-07-26 12:17:07.517728] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:19.703 [2024-07-26 12:17:07.517740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:19.703 [2024-07-26 12:17:07.517754] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.703 [2024-07-26 12:17:07.517764] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517774] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:19.703 [2024-07-26 12:17:07.517783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:19.703 [2024-07-26 12:17:07.517805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:19.703 [2024-07-26 12:17:07.517815] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517824] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.703 [2024-07-26 12:17:07.517834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:19.703 [2024-07-26 12:17:07.517843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:19.703 [2024-07-26 12:17:07.517852] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.703 [2024-07-26 12:17:07.517862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:19.703 [2024-07-26 12:17:07.517871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:19.703 [2024-07-26 12:17:07.517881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:19.703 [2024-07-26 12:17:07.517899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:19.703 [2024-07-26 12:17:07.517909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:19.703 [2024-07-26 12:17:07.517938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.703 [2024-07-26 12:17:07.517957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:19.703 [2024-07-26 12:17:07.517967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:19.703 [2024-07-26 12:17:07.517977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.703 [2024-07-26 12:17:07.517986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:19.703 [2024-07-26 12:17:07.517995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.703 [2024-07-26 12:17:07.518013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:19.703 [2024-07-26 12:17:07.518023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.703 [2024-07-26 12:17:07.518041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:19.703 [2024-07-26 12:17:07.518050] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.703 [2024-07-26 12:17:07.518069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:19.703 [2024-07-26 12:17:07.518078] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:19.703 [2024-07-26 12:17:07.518087] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.703 [2024-07-26 12:17:07.518096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:19.703 [2024-07-26 12:17:07.518105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:19.703 [2024-07-26 12:17:07.518116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:19.703 [2024-07-26 12:17:07.518146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:19.703 [2024-07-26 12:17:07.518156] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518165] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:19.703 [2024-07-26 12:17:07.518175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:19.703 [2024-07-26 12:17:07.518185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.703 [2024-07-26 12:17:07.518195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.703 [2024-07-26 12:17:07.518205] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:19.703 [2024-07-26 12:17:07.518214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:19.703 [2024-07-26 12:17:07.518236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:19.703 [2024-07-26 12:17:07.518245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:19.703 [2024-07-26 12:17:07.518255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:19.703 [2024-07-26 12:17:07.518264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:19.703 [2024-07-26 12:17:07.518275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:19.703 [2024-07-26 12:17:07.518288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:19.703 [2024-07-26 12:17:07.518310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:19.703 [2024-07-26 12:17:07.518320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:19.703 [2024-07-26 12:17:07.518330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:19.703 [2024-07-26 12:17:07.518340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:19.703 [2024-07-26 12:17:07.518351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:19.703 [2024-07-26 12:17:07.518360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:19.703 [2024-07-26 
12:17:07.518370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:19.703 [2024-07-26 12:17:07.518380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:19.703 [2024-07-26 12:17:07.518390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:19.703 [2024-07-26 12:17:07.518441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:19.703 [2024-07-26 12:17:07.518452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:19.703 [2024-07-26 12:17:07.518478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:19.703 [2024-07-26 12:17:07.518488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:19.703 [2024-07-26 12:17:07.518499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:19.703 [2024-07-26 12:17:07.518509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.518519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:19.703 [2024-07-26 12:17:07.518529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:23:19.703 [2024-07-26 12:17:07.518539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.703 [2024-07-26 12:17:07.587930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.587979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.703 [2024-07-26 12:17:07.587998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.456 ms 00:23:19.703 [2024-07-26 12:17:07.588009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.703 [2024-07-26 12:17:07.588107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.588132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.703 [2024-07-26 12:17:07.588144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:19.703 [2024-07-26 12:17:07.588154] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.703 [2024-07-26 12:17:07.635492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.635539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.703 [2024-07-26 12:17:07.635555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.323 ms 00:23:19.703 [2024-07-26 12:17:07.635565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.703 [2024-07-26 12:17:07.635616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.703 [2024-07-26 12:17:07.635627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.703 [2024-07-26 12:17:07.635638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:19.703 [2024-07-26 12:17:07.635652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.704 [2024-07-26 12:17:07.636151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.704 [2024-07-26 12:17:07.636166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.704 [2024-07-26 12:17:07.636178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:23:19.704 [2024-07-26 12:17:07.636188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.704 [2024-07-26 12:17:07.636329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.704 [2024-07-26 12:17:07.636343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.704 [2024-07-26 12:17:07.636355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:19.704 [2024-07-26 12:17:07.636364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.704 [2024-07-26 12:17:07.655856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.704 [2024-07-26 12:17:07.655899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.704 [2024-07-26 12:17:07.655914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.498 ms 00:23:19.704 [2024-07-26 12:17:07.655928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.704 [2024-07-26 12:17:07.675248] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:19.704 [2024-07-26 12:17:07.675291] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:19.704 [2024-07-26 12:17:07.675307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.704 [2024-07-26 12:17:07.675318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:19.704 [2024-07-26 12:17:07.675330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:23:19.704 [2024-07-26 12:17:07.675340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.963 [2024-07-26 12:17:07.704485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.963 [2024-07-26 12:17:07.704550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:19.963 [2024-07-26 12:17:07.704566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.149 ms 00:23:19.963 [2024-07-26 12:17:07.704577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.963 [2024-07-26 
12:17:07.723465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.723508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:19.964 [2024-07-26 12:17:07.723522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:23:19.964 [2024-07-26 12:17:07.723532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.741945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.741986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.964 [2024-07-26 12:17:07.742000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.402 ms 00:23:19.964 [2024-07-26 12:17:07.742009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.742888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.742917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.964 [2024-07-26 12:17:07.742929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:23:19.964 [2024-07-26 12:17:07.742940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.828487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.828556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.964 [2024-07-26 12:17:07.828574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.660 ms 00:23:19.964 [2024-07-26 12:17:07.828591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.840798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:19.964 [2024-07-26 12:17:07.843964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.843996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.964 [2024-07-26 12:17:07.844013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.335 ms 00:23:19.964 [2024-07-26 12:17:07.844023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.844148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.844162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.964 [2024-07-26 12:17:07.844173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.964 [2024-07-26 12:17:07.844186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.845696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.845734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.964 [2024-07-26 12:17:07.845747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:23:19.964 [2024-07-26 12:17:07.845757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.845789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.845800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.964 [2024-07-26 12:17:07.845812] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.964 [2024-07-26 12:17:07.845821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.845858] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.964 [2024-07-26 12:17:07.845873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.845883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.964 [2024-07-26 12:17:07.845894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:19.964 [2024-07-26 12:17:07.845903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.882834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.882879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.964 [2024-07-26 12:17:07.882901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.970 ms 00:23:19.964 [2024-07-26 12:17:07.882915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.882988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.964 [2024-07-26 12:17:07.883000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:19.964 [2024-07-26 12:17:07.883011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:19.964 [2024-07-26 12:17:07.883022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.964 [2024-07-26 12:17:07.888259] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.832 ms, result 0 00:23:55.005  Copying: 26/1024 [MB] (26 MBps) Copying: 57/1024 [MB] (30 MBps) Copying: 87/1024 [MB] (30 MBps) Copying: 117/1024 [MB] (30 MBps) Copying: 148/1024 [MB] (30 MBps) Copying: 176/1024 [MB] (28 MBps) Copying: 206/1024 [MB] (29 MBps) Copying: 236/1024 [MB] (29 MBps) Copying: 266/1024 [MB] (30 MBps) Copying: 295/1024 [MB] (28 MBps) Copying: 324/1024 [MB] (29 MBps) Copying: 353/1024 [MB] (29 MBps) Copying: 384/1024 [MB] (30 MBps) Copying: 414/1024 [MB] (29 MBps) Copying: 443/1024 [MB] (29 MBps) Copying: 474/1024 [MB] (31 MBps) Copying: 504/1024 [MB] (29 MBps) Copying: 534/1024 [MB] (30 MBps) Copying: 563/1024 [MB] (29 MBps) Copying: 593/1024 [MB] (29 MBps) Copying: 622/1024 [MB] (28 MBps) Copying: 651/1024 [MB] (28 MBps) Copying: 681/1024 [MB] (30 MBps) Copying: 711/1024 [MB] (29 MBps) Copying: 741/1024 [MB] (30 MBps) Copying: 772/1024 [MB] (30 MBps) Copying: 802/1024 [MB] (30 MBps) Copying: 832/1024 [MB] (30 MBps) Copying: 861/1024 [MB] (28 MBps) Copying: 890/1024 [MB] (29 MBps) Copying: 920/1024 [MB] (30 MBps) Copying: 949/1024 [MB] (28 MBps) Copying: 978/1024 [MB] (29 MBps) Copying: 1009/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-26 12:17:42.863250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.863313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:55.005 [2024-07-26 12:17:42.863333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:55.005 [2024-07-26 12:17:42.863357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.863383] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: 
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:55.005 [2024-07-26 12:17:42.867934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.867977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:55.005 [2024-07-26 12:17:42.867992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.538 ms 00:23:55.005 [2024-07-26 12:17:42.868004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.868395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.868409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:55.005 [2024-07-26 12:17:42.868421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:23:55.005 [2024-07-26 12:17:42.868433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.873345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.873388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:55.005 [2024-07-26 12:17:42.873403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms 00:23:55.005 [2024-07-26 12:17:42.873414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.879879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.879922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:55.005 [2024-07-26 12:17:42.879936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.434 ms 00:23:55.005 [2024-07-26 12:17:42.879947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.920679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.920737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:55.005 [2024-07-26 12:17:42.920755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.736 ms 00:23:55.005 [2024-07-26 12:17:42.920765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.005 [2024-07-26 12:17:42.941718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.005 [2024-07-26 12:17:42.941777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:55.005 [2024-07-26 12:17:42.941794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.934 ms 00:23:55.005 [2024-07-26 12:17:42.941805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.076518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.282 [2024-07-26 12:17:43.076606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:55.282 [2024-07-26 12:17:43.076625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.873 ms 00:23:55.282 [2024-07-26 12:17:43.076636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.115340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.282 [2024-07-26 12:17:43.115399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:55.282 [2024-07-26 12:17:43.115423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
38.746 ms 00:23:55.282 [2024-07-26 12:17:43.115434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.152160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.282 [2024-07-26 12:17:43.152208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:55.282 [2024-07-26 12:17:43.152225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.737 ms 00:23:55.282 [2024-07-26 12:17:43.152235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.188493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.282 [2024-07-26 12:17:43.188534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:55.282 [2024-07-26 12:17:43.188549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.275 ms 00:23:55.282 [2024-07-26 12:17:43.188571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.224990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.282 [2024-07-26 12:17:43.225033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:55.282 [2024-07-26 12:17:43.225048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.399 ms 00:23:55.282 [2024-07-26 12:17:43.225058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.282 [2024-07-26 12:17:43.225097] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:55.282 [2024-07-26 12:17:43.225114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:23:55.282 [2024-07-26 12:17:43.225140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 
/ 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:55.282 [2024-07-26 12:17:43.225499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225830] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.225999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 
12:17:43.226103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:55.283 [2024-07-26 12:17:43.226249] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:55.283 [2024-07-26 12:17:43.226259] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 84b0b1fd-0692-4636-ae1e-94f73c17c0ad 00:23:55.283 [2024-07-26 12:17:43.226270] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:23:55.283 [2024-07-26 12:17:43.226280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26304 00:23:55.283 [2024-07-26 12:17:43.226294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25344 00:23:55.283 [2024-07-26 12:17:43.226305] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0379 00:23:55.283 [2024-07-26 12:17:43.226314] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:55.283 [2024-07-26 12:17:43.226328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:55.283 [2024-07-26 12:17:43.226338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:55.283 [2024-07-26 12:17:43.226348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:55.283 [2024-07-26 12:17:43.226357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:55.283 [2024-07-26 12:17:43.226367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.283 [2024-07-26 12:17:43.226377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:55.283 [2024-07-26 12:17:43.226387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:23:55.283 [2024-07-26 12:17:43.226397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.283 [2024-07-26 12:17:43.245685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.283 [2024-07-26 
12:17:43.245728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:55.283 [2024-07-26 12:17:43.245742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.281 ms 00:23:55.283 [2024-07-26 12:17:43.245769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.283 [2024-07-26 12:17:43.246299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.283 [2024-07-26 12:17:43.246311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:55.283 [2024-07-26 12:17:43.246322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:23:55.283 [2024-07-26 12:17:43.246331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.542 [2024-07-26 12:17:43.289559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.542 [2024-07-26 12:17:43.289622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.542 [2024-07-26 12:17:43.289637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.542 [2024-07-26 12:17:43.289648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.542 [2024-07-26 12:17:43.289709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.542 [2024-07-26 12:17:43.289721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.542 [2024-07-26 12:17:43.289732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.542 [2024-07-26 12:17:43.289742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.542 [2024-07-26 12:17:43.289832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.542 [2024-07-26 12:17:43.289845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.542 [2024-07-26 12:17:43.289860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.542 [2024-07-26 12:17:43.289870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.542 [2024-07-26 12:17:43.289888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.542 [2024-07-26 12:17:43.289899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.542 [2024-07-26 12:17:43.289909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.542 [2024-07-26 12:17:43.289919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.406240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.406304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.543 [2024-07-26 12:17:43.406326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.406337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.505897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.505962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:55.543 [2024-07-26 12:17:43.505978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.505988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506079] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.543 [2024-07-26 12:17:43.506101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.543 [2024-07-26 12:17:43.506199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.543 [2024-07-26 12:17:43.506347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:55.543 [2024-07-26 12:17:43.506426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.543 [2024-07-26 12:17:43.506493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.543 [2024-07-26 12:17:43.506559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.543 [2024-07-26 12:17:43.506569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.543 [2024-07-26 12:17:43.506578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.543 [2024-07-26 12:17:43.506691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 644.458 ms, result 0 00:23:56.919 00:23:56.919 00:23:56.919 12:17:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:58.823 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.823 Process with pid 80233 is not found 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80233 00:23:58.823 12:17:46 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80233 ']' 00:23:58.823 12:17:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80233 00:23:58.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80233) - No such process 00:23:58.823 12:17:46 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80233 is not found' 00:23:58.823 Remove shared memory files 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:58.823 12:17:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:58.823 ************************************ 00:23:58.823 END TEST ftl_restore 00:23:58.823 ************************************ 00:23:58.823 00:23:58.823 real 2m56.374s 00:23:58.823 user 2m44.580s 00:23:58.823 sys 0m13.089s 00:23:58.823 12:17:46 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:58.823 12:17:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:58.823 12:17:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:58.823 12:17:46 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:58.823 12:17:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:58.823 12:17:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:58.823 ************************************ 00:23:58.823 START TEST ftl_dirty_shutdown 00:23:58.823 ************************************ 00:23:58.823 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:59.082 * Looking for test storage... 00:23:59.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:59.082 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82124 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82124 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82124 ']' 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.083 12:17:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:59.083 [2024-07-26 12:17:47.050131] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:23:59.083 [2024-07-26 12:17:47.050452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82124 ] 00:23:59.341 [2024-07-26 12:17:47.222255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.600 [2024-07-26 12:17:47.451613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:00.535 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:00.794 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:01.053 { 00:24:01.053 "name": "nvme0n1", 00:24:01.053 "aliases": [ 00:24:01.053 "b46fd302-f3cf-43b8-821b-23380fad82f5" 00:24:01.053 ], 00:24:01.053 "product_name": "NVMe disk", 00:24:01.053 "block_size": 4096, 00:24:01.053 "num_blocks": 1310720, 00:24:01.053 "uuid": "b46fd302-f3cf-43b8-821b-23380fad82f5", 00:24:01.053 "assigned_rate_limits": { 00:24:01.053 "rw_ios_per_sec": 0, 00:24:01.053 "rw_mbytes_per_sec": 0, 00:24:01.053 "r_mbytes_per_sec": 0, 00:24:01.053 "w_mbytes_per_sec": 0 00:24:01.053 }, 00:24:01.053 "claimed": true, 00:24:01.053 "claim_type": "read_many_write_one", 00:24:01.053 "zoned": false, 00:24:01.053 "supported_io_types": { 00:24:01.053 "read": true, 00:24:01.053 "write": true, 00:24:01.053 "unmap": true, 00:24:01.053 "flush": true, 00:24:01.053 "reset": true, 00:24:01.053 "nvme_admin": true, 00:24:01.053 "nvme_io": true, 00:24:01.053 "nvme_io_md": false, 00:24:01.053 "write_zeroes": true, 00:24:01.053 "zcopy": false, 00:24:01.053 "get_zone_info": false, 00:24:01.053 "zone_management": false, 00:24:01.053 "zone_append": false, 00:24:01.053 "compare": true, 00:24:01.053 "compare_and_write": false, 00:24:01.053 "abort": true, 00:24:01.053 "seek_hole": false, 00:24:01.053 "seek_data": false, 00:24:01.053 "copy": true, 00:24:01.053 "nvme_iov_md": false 00:24:01.053 }, 00:24:01.053 "driver_specific": { 00:24:01.053 "nvme": [ 00:24:01.053 { 00:24:01.053 "pci_address": "0000:00:11.0", 00:24:01.053 "trid": { 00:24:01.053 "trtype": "PCIe", 00:24:01.053 "traddr": "0000:00:11.0" 00:24:01.053 }, 00:24:01.053 "ctrlr_data": { 00:24:01.053 "cntlid": 0, 00:24:01.053 "vendor_id": "0x1b36", 00:24:01.053 "model_number": "QEMU NVMe Ctrl", 00:24:01.053 "serial_number": "12341", 00:24:01.053 "firmware_revision": "8.0.0", 00:24:01.053 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:01.053 "oacs": { 00:24:01.053 "security": 0, 00:24:01.053 "format": 1, 00:24:01.053 "firmware": 0, 00:24:01.053 "ns_manage": 1 00:24:01.053 }, 00:24:01.053 "multi_ctrlr": false, 00:24:01.053 "ana_reporting": false 00:24:01.053 }, 00:24:01.053 "vs": { 00:24:01.053 "nvme_version": "1.4" 00:24:01.053 }, 00:24:01.053 "ns_data": { 00:24:01.053 "id": 1, 00:24:01.053 "can_share": false 00:24:01.053 } 00:24:01.053 } 00:24:01.053 ], 00:24:01.053 "mp_policy": "active_passive" 00:24:01.053 } 00:24:01.053 } 00:24:01.053 ]' 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:01.053 12:17:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:01.312 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7f3c9051-084a-4b90-b0ae-016595e75ac2 00:24:01.312 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:01.312 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7f3c9051-084a-4b90-b0ae-016595e75ac2 00:24:01.571 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:01.571 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f3dc334c-b128-475d-9e2a-c2201f963684 00:24:01.572 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f3dc334c-b128-475d-9e2a-c2201f963684 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=becc29f8-9dc2-4915-8f79-84db9d244526 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 becc29f8-9dc2-4915-8f79-84db9d244526 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=becc29f8-9dc2-4915-8f79-84db9d244526 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size becc29f8-9dc2-4915-8f79-84db9d244526 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=becc29f8-9dc2-4915-8f79-84db9d244526 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:01.831 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:01.832 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:01.832 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.091 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:02.091 { 00:24:02.091 "name": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:02.091 "aliases": [ 00:24:02.091 "lvs/nvme0n1p0" 00:24:02.091 ], 00:24:02.091 "product_name": "Logical Volume", 00:24:02.091 "block_size": 4096, 00:24:02.091 "num_blocks": 26476544, 00:24:02.091 "uuid": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:02.091 "assigned_rate_limits": { 00:24:02.091 "rw_ios_per_sec": 0, 00:24:02.091 "rw_mbytes_per_sec": 0, 00:24:02.091 "r_mbytes_per_sec": 0, 00:24:02.091 "w_mbytes_per_sec": 0 00:24:02.091 }, 00:24:02.091 "claimed": false, 00:24:02.091 "zoned": false, 00:24:02.091 "supported_io_types": { 00:24:02.091 "read": true, 00:24:02.091 "write": true, 00:24:02.091 "unmap": true, 00:24:02.091 "flush": false, 00:24:02.091 "reset": true, 
00:24:02.091 "nvme_admin": false, 00:24:02.091 "nvme_io": false, 00:24:02.091 "nvme_io_md": false, 00:24:02.091 "write_zeroes": true, 00:24:02.091 "zcopy": false, 00:24:02.091 "get_zone_info": false, 00:24:02.091 "zone_management": false, 00:24:02.091 "zone_append": false, 00:24:02.091 "compare": false, 00:24:02.091 "compare_and_write": false, 00:24:02.091 "abort": false, 00:24:02.091 "seek_hole": true, 00:24:02.091 "seek_data": true, 00:24:02.091 "copy": false, 00:24:02.091 "nvme_iov_md": false 00:24:02.091 }, 00:24:02.091 "driver_specific": { 00:24:02.091 "lvol": { 00:24:02.091 "lvol_store_uuid": "f3dc334c-b128-475d-9e2a-c2201f963684", 00:24:02.091 "base_bdev": "nvme0n1", 00:24:02.091 "thin_provision": true, 00:24:02.091 "num_allocated_clusters": 0, 00:24:02.091 "snapshot": false, 00:24:02.091 "clone": false, 00:24:02.091 "esnap_clone": false 00:24:02.091 } 00:24:02.091 } 00:24:02.091 } 00:24:02.091 ]' 00:24:02.091 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:02.091 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:02.091 12:17:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:02.091 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:02.350 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:02.608 { 00:24:02.608 "name": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:02.608 "aliases": [ 00:24:02.608 "lvs/nvme0n1p0" 00:24:02.608 ], 00:24:02.608 "product_name": "Logical Volume", 00:24:02.608 "block_size": 4096, 00:24:02.608 "num_blocks": 26476544, 00:24:02.608 "uuid": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:02.608 "assigned_rate_limits": { 00:24:02.608 "rw_ios_per_sec": 0, 00:24:02.608 "rw_mbytes_per_sec": 0, 00:24:02.608 "r_mbytes_per_sec": 0, 00:24:02.608 "w_mbytes_per_sec": 0 00:24:02.608 }, 00:24:02.608 "claimed": false, 00:24:02.608 "zoned": false, 00:24:02.608 "supported_io_types": { 00:24:02.608 "read": true, 00:24:02.608 "write": true, 00:24:02.608 "unmap": 
true, 00:24:02.608 "flush": false, 00:24:02.608 "reset": true, 00:24:02.608 "nvme_admin": false, 00:24:02.608 "nvme_io": false, 00:24:02.608 "nvme_io_md": false, 00:24:02.608 "write_zeroes": true, 00:24:02.608 "zcopy": false, 00:24:02.608 "get_zone_info": false, 00:24:02.608 "zone_management": false, 00:24:02.608 "zone_append": false, 00:24:02.608 "compare": false, 00:24:02.608 "compare_and_write": false, 00:24:02.608 "abort": false, 00:24:02.608 "seek_hole": true, 00:24:02.608 "seek_data": true, 00:24:02.608 "copy": false, 00:24:02.608 "nvme_iov_md": false 00:24:02.608 }, 00:24:02.608 "driver_specific": { 00:24:02.608 "lvol": { 00:24:02.608 "lvol_store_uuid": "f3dc334c-b128-475d-9e2a-c2201f963684", 00:24:02.608 "base_bdev": "nvme0n1", 00:24:02.608 "thin_provision": true, 00:24:02.608 "num_allocated_clusters": 0, 00:24:02.608 "snapshot": false, 00:24:02.608 "clone": false, 00:24:02.608 "esnap_clone": false 00:24:02.608 } 00:24:02.608 } 00:24:02.608 } 00:24:02.608 ]' 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:02.608 12:17:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=becc29f8-9dc2-4915-8f79-84db9d244526 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:24:02.867 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b becc29f8-9dc2-4915-8f79-84db9d244526 00:24:03.125 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:24:03.125 { 00:24:03.125 "name": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:03.125 "aliases": [ 00:24:03.125 "lvs/nvme0n1p0" 00:24:03.125 ], 00:24:03.125 "product_name": "Logical Volume", 00:24:03.125 "block_size": 4096, 00:24:03.125 "num_blocks": 26476544, 00:24:03.125 "uuid": "becc29f8-9dc2-4915-8f79-84db9d244526", 00:24:03.125 "assigned_rate_limits": { 00:24:03.125 "rw_ios_per_sec": 0, 00:24:03.125 "rw_mbytes_per_sec": 0, 00:24:03.125 "r_mbytes_per_sec": 0, 00:24:03.125 "w_mbytes_per_sec": 0 00:24:03.125 }, 00:24:03.125 "claimed": false, 00:24:03.125 "zoned": false, 00:24:03.125 "supported_io_types": { 00:24:03.125 "read": true, 00:24:03.125 "write": true, 00:24:03.125 "unmap": true, 00:24:03.125 "flush": false, 00:24:03.125 "reset": true, 00:24:03.125 "nvme_admin": false, 00:24:03.125 
"nvme_io": false, 00:24:03.125 "nvme_io_md": false, 00:24:03.125 "write_zeroes": true, 00:24:03.125 "zcopy": false, 00:24:03.125 "get_zone_info": false, 00:24:03.125 "zone_management": false, 00:24:03.125 "zone_append": false, 00:24:03.125 "compare": false, 00:24:03.125 "compare_and_write": false, 00:24:03.125 "abort": false, 00:24:03.125 "seek_hole": true, 00:24:03.125 "seek_data": true, 00:24:03.125 "copy": false, 00:24:03.125 "nvme_iov_md": false 00:24:03.125 }, 00:24:03.125 "driver_specific": { 00:24:03.125 "lvol": { 00:24:03.125 "lvol_store_uuid": "f3dc334c-b128-475d-9e2a-c2201f963684", 00:24:03.125 "base_bdev": "nvme0n1", 00:24:03.125 "thin_provision": true, 00:24:03.125 "num_allocated_clusters": 0, 00:24:03.125 "snapshot": false, 00:24:03.125 "clone": false, 00:24:03.125 "esnap_clone": false 00:24:03.125 } 00:24:03.125 } 00:24:03.125 } 00:24:03.125 ]' 00:24:03.125 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:24:03.125 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:24:03.125 12:17:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d becc29f8-9dc2-4915-8f79-84db9d244526 --l2p_dram_limit 10' 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:03.125 12:17:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d becc29f8-9dc2-4915-8f79-84db9d244526 --l2p_dram_limit 10 -c nvc0n1p0 00:24:03.386 [2024-07-26 12:17:51.160401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.386 [2024-07-26 12:17:51.160462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:03.386 [2024-07-26 12:17:51.160479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:03.386 [2024-07-26 12:17:51.160491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.386 [2024-07-26 12:17:51.160556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.386 [2024-07-26 12:17:51.160571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.386 [2024-07-26 12:17:51.160582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:03.386 [2024-07-26 12:17:51.160594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.386 [2024-07-26 12:17:51.160615] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:03.386 [2024-07-26 12:17:51.161727] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:03.386 [2024-07-26 12:17:51.161757] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:03.386 [2024-07-26 12:17:51.161774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.386 [2024-07-26 12:17:51.161785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:24:03.386 [2024-07-26 12:17:51.161797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.386 [2024-07-26 12:17:51.161872] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b8e1e333-1a42-4b51-b148-ec7db2a58227 00:24:03.386 [2024-07-26 12:17:51.163294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.386 [2024-07-26 12:17:51.163321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:03.386 [2024-07-26 12:17:51.163335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:03.386 [2024-07-26 12:17:51.163345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.386 [2024-07-26 12:17:51.170803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.386 [2024-07-26 12:17:51.170833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.386 [2024-07-26 12:17:51.170848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:24:03.386 [2024-07-26 12:17:51.170858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.170958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.170971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.387 [2024-07-26 12:17:51.170984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:03.387 [2024-07-26 12:17:51.170994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.171063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.171075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:03.387 [2024-07-26 12:17:51.171091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:03.387 [2024-07-26 12:17:51.171100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.171157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:03.387 [2024-07-26 12:17:51.176988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.177029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.387 [2024-07-26 12:17:51.177040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.880 ms 00:24:03.387 [2024-07-26 12:17:51.177052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.177090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.177103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:03.387 [2024-07-26 12:17:51.177114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:03.387 [2024-07-26 12:17:51.177134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.177179] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:03.387 [2024-07-26 
12:17:51.177319] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:03.387 [2024-07-26 12:17:51.177333] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:03.387 [2024-07-26 12:17:51.177352] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:03.387 [2024-07-26 12:17:51.177365] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177379] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177390] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:03.387 [2024-07-26 12:17:51.177406] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:03.387 [2024-07-26 12:17:51.177416] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:03.387 [2024-07-26 12:17:51.177428] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:03.387 [2024-07-26 12:17:51.177438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.177450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:03.387 [2024-07-26 12:17:51.177460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:24:03.387 [2024-07-26 12:17:51.177473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.177543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.387 [2024-07-26 12:17:51.177555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:03.387 [2024-07-26 12:17:51.177566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:03.387 [2024-07-26 12:17:51.177581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.387 [2024-07-26 12:17:51.177672] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:03.387 [2024-07-26 12:17:51.177689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:03.387 [2024-07-26 12:17:51.177709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:03.387 [2024-07-26 12:17:51.177744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:03.387 [2024-07-26 12:17:51.177775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.387 [2024-07-26 12:17:51.177796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:03.387 [2024-07-26 12:17:51.177808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:03.387 [2024-07-26 12:17:51.177817] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:24:03.387 [2024-07-26 12:17:51.177830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:03.387 [2024-07-26 12:17:51.177839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:03.387 [2024-07-26 12:17:51.177850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:03.387 [2024-07-26 12:17:51.177873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:03.387 [2024-07-26 12:17:51.177903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:03.387 [2024-07-26 12:17:51.177935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177943] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:03.387 [2024-07-26 12:17:51.177964] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:03.387 [2024-07-26 12:17:51.177975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.387 [2024-07-26 12:17:51.177983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:03.387 [2024-07-26 12:17:51.177995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:03.387 [2024-07-26 12:17:51.178003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.387 [2024-07-26 12:17:51.178015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:03.387 [2024-07-26 12:17:51.178024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:03.387 [2024-07-26 12:17:51.178037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.387 [2024-07-26 12:17:51.178045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:03.387 [2024-07-26 12:17:51.178057] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:03.387 [2024-07-26 12:17:51.178065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.387 [2024-07-26 12:17:51.178078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:03.387 [2024-07-26 12:17:51.178087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:03.387 [2024-07-26 12:17:51.178098] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.178107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:03.387 [2024-07-26 12:17:51.178131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:03.387 [2024-07-26 12:17:51.178141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.178153] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:03.387 [2024-07-26 12:17:51.178163] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:03.387 [2024-07-26 12:17:51.178175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.387 [2024-07-26 12:17:51.178185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.387 [2024-07-26 12:17:51.178197] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:03.388 [2024-07-26 12:17:51.178207] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:03.388 [2024-07-26 12:17:51.178221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:03.388 [2024-07-26 12:17:51.178231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:03.388 [2024-07-26 12:17:51.178242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:03.388 [2024-07-26 12:17:51.178251] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:03.388 [2024-07-26 12:17:51.178267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:03.388 [2024-07-26 12:17:51.178282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:03.388 [2024-07-26 12:17:51.178306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:03.388 [2024-07-26 12:17:51.178319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:03.388 [2024-07-26 12:17:51.178330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:03.388 [2024-07-26 12:17:51.178342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:03.388 [2024-07-26 12:17:51.178352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:03.388 [2024-07-26 12:17:51.178366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:03.388 [2024-07-26 12:17:51.178376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:03.388 [2024-07-26 12:17:51.178388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:03.388 [2024-07-26 12:17:51.178399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:03.388 [2024-07-26 
12:17:51.178446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:03.388 [2024-07-26 12:17:51.178458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:03.388 [2024-07-26 12:17:51.178469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:03.388 [2024-07-26 12:17:51.178492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:03.388 [2024-07-26 12:17:51.178504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:03.388 [2024-07-26 12:17:51.178515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:03.388 [2024-07-26 12:17:51.178528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.388 [2024-07-26 12:17:51.178538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:03.388 [2024-07-26 12:17:51.178552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:24:03.388 [2024-07-26 12:17:51.178561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.388 [2024-07-26 12:17:51.178604] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
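The startup trace above follows from the bdev_ftl_create call issued a few lines earlier; condensed from the xtrace, the RPC sequence that assembled ftl0 in this run looks roughly like the sketch below (addresses, names and sizes are the ones this run used; <lvs-uuid> and <lvol-uuid> stand for the uuids returned by the preceding calls, f3dc334c-... and becc29f8-... here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # base device: a 103424 MiB thin-provisioned lvol on the 0000:00:11.0 namespace
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
    # write-buffer cache: a 5171 MiB split carved out of the 0000:00:10.0 namespace
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1
    # the FTL bdev itself, with the resident L2P capped at 10 MiB of DRAM
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0

The layout dump above is consistent with those sizes: 20971520 L2P entries of 4 bytes each account for the 80.00 MiB l2p region, while --l2p_dram_limit 10 is what later limits the resident portion (the "l2p maximum resident size is: 9 (of 10) MiB" notice further down).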
00:24:03.388 [2024-07-26 12:17:51.178616] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:06.715 [2024-07-26 12:17:54.451082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.451163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:06.715 [2024-07-26 12:17:54.451184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3277.786 ms 00:24:06.715 [2024-07-26 12:17:54.451195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.495890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.495945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.715 [2024-07-26 12:17:54.495963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.444 ms 00:24:06.715 [2024-07-26 12:17:54.495974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.496143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.496157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:06.715 [2024-07-26 12:17:54.496174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:06.715 [2024-07-26 12:17:54.496184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.547120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.547174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.715 [2024-07-26 12:17:54.547191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.969 ms 00:24:06.715 [2024-07-26 12:17:54.547201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.547248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.547258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.715 [2024-07-26 12:17:54.547276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:06.715 [2024-07-26 12:17:54.547285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.547799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.547814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.715 [2024-07-26 12:17:54.547826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:24:06.715 [2024-07-26 12:17:54.547836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.547944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.547960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.715 [2024-07-26 12:17:54.547972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:06.715 [2024-07-26 12:17:54.547982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.570844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.715 [2024-07-26 12:17:54.570895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.715 [2024-07-26 
12:17:54.570912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.872 ms 00:24:06.715 [2024-07-26 12:17:54.570922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.715 [2024-07-26 12:17:54.584314] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:06.715 [2024-07-26 12:17:54.587539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.716 [2024-07-26 12:17:54.587576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:06.716 [2024-07-26 12:17:54.587589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.545 ms 00:24:06.716 [2024-07-26 12:17:54.587601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.716 [2024-07-26 12:17:54.688198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.716 [2024-07-26 12:17:54.688264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:06.716 [2024-07-26 12:17:54.688281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.711 ms 00:24:06.716 [2024-07-26 12:17:54.688294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.716 [2024-07-26 12:17:54.688482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.716 [2024-07-26 12:17:54.688499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:06.716 [2024-07-26 12:17:54.688511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:24:06.716 [2024-07-26 12:17:54.688526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.974 [2024-07-26 12:17:54.725545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.974 [2024-07-26 12:17:54.725590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:06.974 [2024-07-26 12:17:54.725605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.026 ms 00:24:06.974 [2024-07-26 12:17:54.725627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.975 [2024-07-26 12:17:54.761670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.975 [2024-07-26 12:17:54.761713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:06.975 [2024-07-26 12:17:54.761727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.059 ms 00:24:06.975 [2024-07-26 12:17:54.761739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.975 [2024-07-26 12:17:54.762490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.975 [2024-07-26 12:17:54.762516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:06.975 [2024-07-26 12:17:54.762531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:24:06.975 [2024-07-26 12:17:54.762543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.975 [2024-07-26 12:17:54.867890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.975 [2024-07-26 12:17:54.867945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:06.975 [2024-07-26 12:17:54.867961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.463 ms 00:24:06.975 [2024-07-26 12:17:54.867978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.975 [2024-07-26 
12:17:54.905584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.975 [2024-07-26 12:17:54.905658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:06.975 [2024-07-26 12:17:54.905674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.620 ms 00:24:06.975 [2024-07-26 12:17:54.905687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.975 [2024-07-26 12:17:54.942599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.975 [2024-07-26 12:17:54.942650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:06.975 [2024-07-26 12:17:54.942665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.925 ms 00:24:06.975 [2024-07-26 12:17:54.942677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.234 [2024-07-26 12:17:54.980100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.234 [2024-07-26 12:17:54.980159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.234 [2024-07-26 12:17:54.980175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.439 ms 00:24:07.234 [2024-07-26 12:17:54.980187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.234 [2024-07-26 12:17:54.980236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.234 [2024-07-26 12:17:54.980251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.234 [2024-07-26 12:17:54.980261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:07.234 [2024-07-26 12:17:54.980277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.234 [2024-07-26 12:17:54.980370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.234 [2024-07-26 12:17:54.980388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.234 [2024-07-26 12:17:54.980398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:07.234 [2024-07-26 12:17:54.980410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.234 [2024-07-26 12:17:54.981502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3826.787 ms, result 0 00:24:07.234 { 00:24:07.234 "name": "ftl0", 00:24:07.234 "uuid": "b8e1e333-1a42-4b51-b148-ec7db2a58227" 00:24:07.234 } 00:24:07.234 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:07.234 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:07.234 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:07.234 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:07.493 /dev/nbd0 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:07.493 1+0 records in 00:24:07.493 1+0 records out 00:24:07.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470093 s, 8.7 MB/s 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:24:07.493 12:17:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:07.752 [2024-07-26 12:17:55.493909] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:24:07.752 [2024-07-26 12:17:55.494024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82267 ] 00:24:07.752 [2024-07-26 12:17:55.664195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.010 [2024-07-26 12:17:55.880416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.658  Copying: 210/1024 [MB] (210 MBps) Copying: 421/1024 [MB] (211 MBps) Copying: 633/1024 [MB] (212 MBps) Copying: 840/1024 [MB] (206 MBps) Copying: 1024/1024 [MB] (average 209 MBps) 00:24:14.658 00:24:14.658 12:18:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:16.559 12:18:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:16.559 [2024-07-26 12:18:04.237759] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
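The two spdk_dd jobs traced here are the data-loading half of the dirty-shutdown scenario: ftl0 is exported as a kernel block device over NBD, a 1 GiB random test file (262144 blocks of 4096 B) is generated and fingerprinted, and the same data is then written through /dev/nbd0 onto the FTL device. Reduced to its commands (paths are this run's; $testdir is the test/ftl directory set by common.sh), the flow is approximately:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    modprobe nbd
    $rpc nbd_start_disk ftl0 /dev/nbd0                                                  # expose ftl0 as /dev/nbd0
    $spdk_dd -m 0x2 --if=/dev/urandom --of=$testdir/testfile --bs=4096 --count=262144   # 262144 x 4096 B = 1 GiB of random data
    md5sum $testdir/testfile                                                            # fingerprint for later verification
    $spdk_dd -m 0x2 --if=$testdir/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct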
00:24:16.559 [2024-07-26 12:18:04.237882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82363 ] 00:24:16.559 [2024-07-26 12:18:04.408697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.817 [2024-07-26 12:18:04.642504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.287  Copying: 18/1024 [MB] (18 MBps) Copying: 36/1024 [MB] (17 MBps) Copying: 54/1024 [MB] (18 MBps) Copying: 73/1024 [MB] (18 MBps) Copying: 92/1024 [MB] (19 MBps) Copying: 111/1024 [MB] (18 MBps) Copying: 130/1024 [MB] (18 MBps) Copying: 149/1024 [MB] (18 MBps) Copying: 168/1024 [MB] (19 MBps) Copying: 188/1024 [MB] (19 MBps) Copying: 208/1024 [MB] (19 MBps) Copying: 228/1024 [MB] (20 MBps) Copying: 247/1024 [MB] (19 MBps) Copying: 266/1024 [MB] (19 MBps) Copying: 284/1024 [MB] (17 MBps) Copying: 303/1024 [MB] (19 MBps) Copying: 323/1024 [MB] (19 MBps) Copying: 341/1024 [MB] (18 MBps) Copying: 361/1024 [MB] (19 MBps) Copying: 381/1024 [MB] (20 MBps) Copying: 400/1024 [MB] (18 MBps) Copying: 419/1024 [MB] (18 MBps) Copying: 437/1024 [MB] (18 MBps) Copying: 456/1024 [MB] (18 MBps) Copying: 474/1024 [MB] (18 MBps) Copying: 493/1024 [MB] (18 MBps) Copying: 511/1024 [MB] (18 MBps) Copying: 530/1024 [MB] (18 MBps) Copying: 548/1024 [MB] (18 MBps) Copying: 567/1024 [MB] (18 MBps) Copying: 586/1024 [MB] (19 MBps) Copying: 605/1024 [MB] (18 MBps) Copying: 624/1024 [MB] (18 MBps) Copying: 643/1024 [MB] (19 MBps) Copying: 661/1024 [MB] (18 MBps) Copying: 680/1024 [MB] (18 MBps) Copying: 698/1024 [MB] (18 MBps) Copying: 717/1024 [MB] (18 MBps) Copying: 737/1024 [MB] (20 MBps) Copying: 756/1024 [MB] (19 MBps) Copying: 776/1024 [MB] (19 MBps) Copying: 794/1024 [MB] (18 MBps) Copying: 813/1024 [MB] (18 MBps) Copying: 833/1024 [MB] (19 MBps) Copying: 852/1024 [MB] (19 MBps) Copying: 872/1024 [MB] (19 MBps) Copying: 891/1024 [MB] (19 MBps) Copying: 911/1024 [MB] (19 MBps) Copying: 930/1024 [MB] (19 MBps) Copying: 950/1024 [MB] (20 MBps) Copying: 970/1024 [MB] (19 MBps) Copying: 989/1024 [MB] (18 MBps) Copying: 1008/1024 [MB] (18 MBps) Copying: 1024/1024 [MB] (average 19 MBps) 00:25:12.287 00:25:12.287 12:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:12.287 12:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:12.287 12:19:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:12.547 [2024-07-26 12:19:00.411908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.411972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:12.547 [2024-07-26 12:19:00.412003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:12.547 [2024-07-26 12:19:00.412014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.412045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:12.547 [2024-07-26 12:19:00.415864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.415914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
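The management trace around this point is the orderly unload path triggered by the last three commands above: the test syncs the NBD export, stops it, and unloads ftl0, which persists the L2P, NV-cache, band and trim metadata and sets the clean state. A sketch of the same commands as seen in the xtrace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sync /dev/nbd0                          # flush outstanding writes on the NBD export
    $rpc nbd_stop_disk /dev/nbd0
    $rpc bdev_ftl_unload -b ftl0            # graceful unload: persists metadata, marks the device clean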
00:25:12.547 [2024-07-26 12:19:00.415929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:25:12.547 [2024-07-26 12:19:00.415942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.418064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.418135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:12.547 [2024-07-26 12:19:00.418150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:25:12.547 [2024-07-26 12:19:00.418167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.436107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.436204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:12.547 [2024-07-26 12:19:00.436221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:25:12.547 [2024-07-26 12:19:00.436234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.441331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.441383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:12.547 [2024-07-26 12:19:00.441398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.057 ms 00:25:12.547 [2024-07-26 12:19:00.441410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.481149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.481222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:12.547 [2024-07-26 12:19:00.481237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.699 ms 00:25:12.547 [2024-07-26 12:19:00.481250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.505214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.547 [2024-07-26 12:19:00.505300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:12.547 [2024-07-26 12:19:00.505316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.918 ms 00:25:12.547 [2024-07-26 12:19:00.505329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.547 [2024-07-26 12:19:00.505550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.548 [2024-07-26 12:19:00.505568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:12.548 [2024-07-26 12:19:00.505579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:12.548 [2024-07-26 12:19:00.505591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.808 [2024-07-26 12:19:00.545771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.808 [2024-07-26 12:19:00.545846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:12.808 [2024-07-26 12:19:00.545863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.210 ms 00:25:12.808 [2024-07-26 12:19:00.545875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.808 [2024-07-26 12:19:00.585441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.808 [2024-07-26 12:19:00.585499] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:12.808 [2024-07-26 12:19:00.585516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.540 ms 00:25:12.808 [2024-07-26 12:19:00.585529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.808 [2024-07-26 12:19:00.624722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.808 [2024-07-26 12:19:00.624802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:12.808 [2024-07-26 12:19:00.624818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.163 ms 00:25:12.808 [2024-07-26 12:19:00.624831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.808 [2024-07-26 12:19:00.664805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.808 [2024-07-26 12:19:00.664881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:12.808 [2024-07-26 12:19:00.664897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.872 ms 00:25:12.808 [2024-07-26 12:19:00.664910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.808 [2024-07-26 12:19:00.664988] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:12.808 [2024-07-26 12:19:00.665018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 
0 / 261120 wr_cnt: 0 state: free 00:25:12.808 [2024-07-26 12:19:00.665237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665854] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.665994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 
12:19:00.666169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:12.809 [2024-07-26 12:19:00.666240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:12.810 [2024-07-26 12:19:00.666250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:12.810 [2024-07-26 12:19:00.666263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:12.810 [2024-07-26 12:19:00.666273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:12.810 [2024-07-26 12:19:00.666295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:12.810 [2024-07-26 12:19:00.666306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8e1e333-1a42-4b51-b148-ec7db2a58227 00:25:12.810 [2024-07-26 12:19:00.666323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:12.810 [2024-07-26 12:19:00.666336] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:12.810 [2024-07-26 12:19:00.666350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:12.810 [2024-07-26 12:19:00.666360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:12.810 [2024-07-26 12:19:00.666384] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:12.810 [2024-07-26 12:19:00.666394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:12.810 [2024-07-26 12:19:00.666406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:12.810 [2024-07-26 12:19:00.666415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:12.810 [2024-07-26 12:19:00.666426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:12.810 [2024-07-26 12:19:00.666435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.810 [2024-07-26 12:19:00.666447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:12.810 [2024-07-26 12:19:00.666458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:25:12.810 [2024-07-26 12:19:00.666470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.810 [2024-07-26 12:19:00.686348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.810 [2024-07-26 12:19:00.686416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:12.810 [2024-07-26 12:19:00.686431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.826 ms 00:25:12.810 [2024-07-26 12:19:00.686443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:12.810 [2024-07-26 12:19:00.686920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.810 [2024-07-26 12:19:00.686938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:12.810 [2024-07-26 12:19:00.686949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:25:12.810 [2024-07-26 12:19:00.686961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.810 [2024-07-26 12:19:00.747301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.810 [2024-07-26 12:19:00.747369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.810 [2024-07-26 12:19:00.747386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.810 [2024-07-26 12:19:00.747399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.810 [2024-07-26 12:19:00.747481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.810 [2024-07-26 12:19:00.747495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.810 [2024-07-26 12:19:00.747505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.810 [2024-07-26 12:19:00.747517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.810 [2024-07-26 12:19:00.747643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.810 [2024-07-26 12:19:00.747660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.810 [2024-07-26 12:19:00.747671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.810 [2024-07-26 12:19:00.747683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.810 [2024-07-26 12:19:00.747704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.810 [2024-07-26 12:19:00.747719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.810 [2024-07-26 12:19:00.747729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.810 [2024-07-26 12:19:00.747742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.866934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.867002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.070 [2024-07-26 12:19:00.867018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.867032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.972904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.972978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.070 [2024-07-26 12:19:00.972993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.070 [2024-07-26 12:19:00.973203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 
[2024-07-26 12:19:00.973216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.070 [2024-07-26 12:19:00.973306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.070 [2024-07-26 12:19:00.973484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:13.070 [2024-07-26 12:19:00.973567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.070 [2024-07-26 12:19:00.973657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:13.070 [2024-07-26 12:19:00.973757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.070 [2024-07-26 12:19:00.973768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:13.070 [2024-07-26 12:19:00.973781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.070 [2024-07-26 12:19:00.973918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 562.906 ms, result 0 00:25:13.070 true 00:25:13.070 12:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82124 00:25:13.070 12:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82124 00:25:13.070 12:19:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:13.330 [2024-07-26 12:19:01.091665] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:25:13.330 [2024-07-26 12:19:01.091802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82937 ] 00:25:13.330 [2024-07-26 12:19:01.263987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.588 [2024-07-26 12:19:01.509996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.533  Copying: 201/1024 [MB] (201 MBps) Copying: 406/1024 [MB] (204 MBps) Copying: 609/1024 [MB] (202 MBps) Copying: 809/1024 [MB] (200 MBps) Copying: 1008/1024 [MB] (198 MBps) Copying: 1024/1024 [MB] (average 201 MBps) 00:25:20.533 00:25:20.533 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82124 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:20.533 12:19:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:20.533 [2024-07-26 12:19:08.409335] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:25:20.533 [2024-07-26 12:19:08.409496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83011 ] 00:25:20.791 [2024-07-26 12:19:08.582416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.048 [2024-07-26 12:19:08.824633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.306 [2024-07-26 12:19:09.241230] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:21.306 [2024-07-26 12:19:09.241311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:21.564 [2024-07-26 12:19:09.308211] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:21.564 [2024-07-26 12:19:09.308568] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:21.564 [2024-07-26 12:19:09.308753] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:21.824 [2024-07-26 12:19:09.546057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.546139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:21.824 [2024-07-26 12:19:09.546158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:21.824 [2024-07-26 12:19:09.546169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.546235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.546251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.824 [2024-07-26 12:19:09.546263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:21.824 [2024-07-26 12:19:09.546273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.546295] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:21.824 [2024-07-26 12:19:09.547600] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:25:21.824 [2024-07-26 12:19:09.547631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.547643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.824 [2024-07-26 12:19:09.547655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:25:21.824 [2024-07-26 12:19:09.547666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.549257] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:21.824 [2024-07-26 12:19:09.570199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.570267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:21.824 [2024-07-26 12:19:09.570296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.974 ms 00:25:21.824 [2024-07-26 12:19:09.570308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.570424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.570438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:21.824 [2024-07-26 12:19:09.570449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:21.824 [2024-07-26 12:19:09.570459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.578653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.578710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.824 [2024-07-26 12:19:09.578724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.103 ms 00:25:21.824 [2024-07-26 12:19:09.578734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.578821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.578834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.824 [2024-07-26 12:19:09.578845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:21.824 [2024-07-26 12:19:09.578855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.578909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.578921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:21.824 [2024-07-26 12:19:09.578935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:21.824 [2024-07-26 12:19:09.578945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.578972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:21.824 [2024-07-26 12:19:09.584703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.584742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.824 [2024-07-26 12:19:09.584756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 00:25:21.824 [2024-07-26 12:19:09.584767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.584805] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.584818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:21.824 [2024-07-26 12:19:09.584829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:21.824 [2024-07-26 12:19:09.584839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.584907] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:21.824 [2024-07-26 12:19:09.584934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:21.824 [2024-07-26 12:19:09.584975] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:21.824 [2024-07-26 12:19:09.584994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:21.824 [2024-07-26 12:19:09.585081] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:21.824 [2024-07-26 12:19:09.585095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:21.824 [2024-07-26 12:19:09.585109] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:21.824 [2024-07-26 12:19:09.585138] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585151] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585166] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:21.824 [2024-07-26 12:19:09.585177] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:21.824 [2024-07-26 12:19:09.585187] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:21.824 [2024-07-26 12:19:09.585197] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:21.824 [2024-07-26 12:19:09.585208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.585218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:21.824 [2024-07-26 12:19:09.585229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:25:21.824 [2024-07-26 12:19:09.585239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.585316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.824 [2024-07-26 12:19:09.585327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:21.824 [2024-07-26 12:19:09.585341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:21.824 [2024-07-26 12:19:09.585351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.824 [2024-07-26 12:19:09.585438] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:21.824 [2024-07-26 12:19:09.585452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:21.824 [2024-07-26 12:19:09.585463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.824 [2024-07-26 
12:19:09.585484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:21.824 [2024-07-26 12:19:09.585494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:21.824 [2024-07-26 12:19:09.585524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:21.824 [2024-07-26 12:19:09.585543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:21.824 [2024-07-26 12:19:09.585553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:21.824 [2024-07-26 12:19:09.585562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:21.824 [2024-07-26 12:19:09.585589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:21.824 [2024-07-26 12:19:09.585599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:21.824 [2024-07-26 12:19:09.585621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:21.824 [2024-07-26 12:19:09.585656] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585668] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:21.824 [2024-07-26 12:19:09.585688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:21.824 [2024-07-26 12:19:09.585730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.824 [2024-07-26 12:19:09.585749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:21.824 [2024-07-26 12:19:09.585759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:21.824 [2024-07-26 12:19:09.585769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.825 [2024-07-26 12:19:09.585778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:21.825 [2024-07-26 12:19:09.585788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:21.825 [2024-07-26 12:19:09.585798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:21.825 [2024-07-26 12:19:09.585807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:21.825 [2024-07-26 12:19:09.585816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:21.825 [2024-07-26 12:19:09.585826] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:21.825 [2024-07-26 12:19:09.585836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:21.825 [2024-07-26 12:19:09.585845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:25:21.825 [2024-07-26 12:19:09.585855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:21.825 [2024-07-26 12:19:09.585864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:21.825 [2024-07-26 12:19:09.585873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:21.825 [2024-07-26 12:19:09.585882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.825 [2024-07-26 12:19:09.585892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:21.825 [2024-07-26 12:19:09.585901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:21.825 [2024-07-26 12:19:09.585911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.825 [2024-07-26 12:19:09.585919] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:21.825 [2024-07-26 12:19:09.585930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:21.825 [2024-07-26 12:19:09.585940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:21.825 [2024-07-26 12:19:09.585950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:21.825 [2024-07-26 12:19:09.585965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:21.825 [2024-07-26 12:19:09.585975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:21.825 [2024-07-26 12:19:09.585985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:21.825 [2024-07-26 12:19:09.585995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:21.825 [2024-07-26 12:19:09.586005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:21.825 [2024-07-26 12:19:09.586014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:21.825 [2024-07-26 12:19:09.586025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:21.825 [2024-07-26 12:19:09.586039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:21.825 [2024-07-26 12:19:09.586063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:21.825 [2024-07-26 12:19:09.586073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:21.825 [2024-07-26 12:19:09.586084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:21.825 [2024-07-26 12:19:09.586095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:21.825 [2024-07-26 12:19:09.586105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:21.825 [2024-07-26 12:19:09.586116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:21.825 [2024-07-26 12:19:09.586126] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:21.825 [2024-07-26 12:19:09.586147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:21.825 [2024-07-26 12:19:09.586158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:21.825 [2024-07-26 12:19:09.586213] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:21.825 [2024-07-26 12:19:09.586225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:21.825 [2024-07-26 12:19:09.586248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:21.825 [2024-07-26 12:19:09.586258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:21.825 [2024-07-26 12:19:09.586269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:21.825 [2024-07-26 12:19:09.586280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.586291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:21.825 [2024-07-26 12:19:09.586302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:25:21.825 [2024-07-26 12:19:09.586312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.643805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.643871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.825 [2024-07-26 12:19:09.643889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.529 ms 00:25:21.825 [2024-07-26 12:19:09.643900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.644010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.644022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:21.825 [2024-07-26 12:19:09.644039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:21.825 [2024-07-26 12:19:09.644049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:21.825 [2024-07-26 12:19:09.699888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.699945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.825 [2024-07-26 12:19:09.699962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.795 ms 00:25:21.825 [2024-07-26 12:19:09.699973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.700036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.700048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.825 [2024-07-26 12:19:09.700060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:21.825 [2024-07-26 12:19:09.700070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.700587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.700602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.825 [2024-07-26 12:19:09.700614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:25:21.825 [2024-07-26 12:19:09.700624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.700756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.700771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.825 [2024-07-26 12:19:09.700783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:21.825 [2024-07-26 12:19:09.700793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.722174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.722239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.825 [2024-07-26 12:19:09.722255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.392 ms 00:25:21.825 [2024-07-26 12:19:09.722266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.744452] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:21.825 [2024-07-26 12:19:09.744535] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:21.825 [2024-07-26 12:19:09.744554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.744566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:21.825 [2024-07-26 12:19:09.744580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.170 ms 00:25:21.825 [2024-07-26 12:19:09.744590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.825 [2024-07-26 12:19:09.777092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.825 [2024-07-26 12:19:09.777182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:21.825 [2024-07-26 12:19:09.777198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.473 ms 00:25:21.825 [2024-07-26 12:19:09.777210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.826 [2024-07-26 12:19:09.798876] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:21.826 [2024-07-26 12:19:09.798945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:21.826 [2024-07-26 12:19:09.798962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.593 ms 00:25:21.826 [2024-07-26 12:19:09.798974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.819908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.819961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:22.085 [2024-07-26 12:19:09.819978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.883 ms 00:25:22.085 [2024-07-26 12:19:09.819988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.820906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.820939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:22.085 [2024-07-26 12:19:09.820952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:25:22.085 [2024-07-26 12:19:09.820963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.916874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.916950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:22.085 [2024-07-26 12:19:09.916967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.033 ms 00:25:22.085 [2024-07-26 12:19:09.916978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.932137] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:22.085 [2024-07-26 12:19:09.935530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.935578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:22.085 [2024-07-26 12:19:09.935593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.499 ms 00:25:22.085 [2024-07-26 12:19:09.935604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.935724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.935742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:22.085 [2024-07-26 12:19:09.935754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:22.085 [2024-07-26 12:19:09.935764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.935836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.935848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:22.085 [2024-07-26 12:19:09.935859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:22.085 [2024-07-26 12:19:09.935869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.935890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.935900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:22.085 [2024-07-26 12:19:09.935914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:25:22.085 [2024-07-26 12:19:09.935924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.935956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:22.085 [2024-07-26 12:19:09.935968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.935978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:22.085 [2024-07-26 12:19:09.935988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:22.085 [2024-07-26 12:19:09.935998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.978460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.978541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:22.085 [2024-07-26 12:19:09.978558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.508 ms 00:25:22.085 [2024-07-26 12:19:09.978568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.978673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:22.085 [2024-07-26 12:19:09.978685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:22.085 [2024-07-26 12:19:09.978696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:22.085 [2024-07-26 12:19:09.978706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:22.085 [2024-07-26 12:19:09.979868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.052 ms, result 0 00:25:57.380  Copying: 32/1024 [MB] (32 MBps) Copying: 63/1024 [MB] (30 MBps) Copying: 93/1024 [MB] (29 MBps) Copying: 121/1024 [MB] (28 MBps) Copying: 151/1024 [MB] (30 MBps) Copying: 182/1024 [MB] (30 MBps) Copying: 212/1024 [MB] (29 MBps) Copying: 241/1024 [MB] (29 MBps) Copying: 272/1024 [MB] (30 MBps) Copying: 303/1024 [MB] (30 MBps) Copying: 333/1024 [MB] (30 MBps) Copying: 363/1024 [MB] (29 MBps) Copying: 393/1024 [MB] (29 MBps) Copying: 423/1024 [MB] (29 MBps) Copying: 452/1024 [MB] (28 MBps) Copying: 482/1024 [MB] (29 MBps) Copying: 513/1024 [MB] (30 MBps) Copying: 542/1024 [MB] (29 MBps) Copying: 571/1024 [MB] (29 MBps) Copying: 600/1024 [MB] (28 MBps) Copying: 629/1024 [MB] (29 MBps) Copying: 657/1024 [MB] (27 MBps) Copying: 686/1024 [MB] (28 MBps) Copying: 715/1024 [MB] (29 MBps) Copying: 744/1024 [MB] (28 MBps) Copying: 773/1024 [MB] (28 MBps) Copying: 801/1024 [MB] (28 MBps) Copying: 831/1024 [MB] (29 MBps) Copying: 859/1024 [MB] (28 MBps) Copying: 888/1024 [MB] (28 MBps) Copying: 917/1024 [MB] (29 MBps) Copying: 946/1024 [MB] (28 MBps) Copying: 976/1024 [MB] (30 MBps) Copying: 1006/1024 [MB] (29 MBps) Copying: 1023/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-26 12:19:45.317984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.380 [2024-07-26 12:19:45.318217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.380 [2024-07-26 12:19:45.318395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.380 [2024-07-26 12:19:45.318436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.380 [2024-07-26 12:19:45.320434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.380 [2024-07-26 12:19:45.324941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.380 [2024-07-26 12:19:45.325079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.380 [2024-07-26 12:19:45.325206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.368 ms 00:25:57.380 [2024-07-26 12:19:45.325244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.380 [2024-07-26 12:19:45.335414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.380 [2024-07-26 12:19:45.335565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.380 [2024-07-26 12:19:45.335692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.014 ms 00:25:57.380 [2024-07-26 12:19:45.335730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.639 [2024-07-26 12:19:45.360132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.639 [2024-07-26 12:19:45.360320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.639 [2024-07-26 12:19:45.360421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.397 ms 00:25:57.639 [2024-07-26 12:19:45.360460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.639 [2024-07-26 12:19:45.365640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.639 [2024-07-26 12:19:45.365769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.639 [2024-07-26 12:19:45.365863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:25:57.639 [2024-07-26 12:19:45.365898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.639 [2024-07-26 12:19:45.403181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.639 [2024-07-26 12:19:45.403359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.639 [2024-07-26 12:19:45.403431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.273 ms 00:25:57.639 [2024-07-26 12:19:45.403466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.639 [2024-07-26 12:19:45.425134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.639 [2024-07-26 12:19:45.425288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.639 [2024-07-26 12:19:45.425360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.644 ms 00:25:57.639 [2024-07-26 12:19:45.425394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.640 [2024-07-26 12:19:45.536317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.640 [2024-07-26 12:19:45.536562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.640 [2024-07-26 12:19:45.536652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.036 ms 00:25:57.640 [2024-07-26 12:19:45.536701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.640 [2024-07-26 12:19:45.574633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.640 [2024-07-26 12:19:45.574683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:57.640 [2024-07-26 12:19:45.574699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.901 
ms 00:25:57.640 [2024-07-26 12:19:45.574709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.640 [2024-07-26 12:19:45.612247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.640 [2024-07-26 12:19:45.612300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:57.640 [2024-07-26 12:19:45.612316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.555 ms 00:25:57.640 [2024-07-26 12:19:45.612326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.899 [2024-07-26 12:19:45.651609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.899 [2024-07-26 12:19:45.651662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.899 [2024-07-26 12:19:45.651678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.298 ms 00:25:57.899 [2024-07-26 12:19:45.651688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.899 [2024-07-26 12:19:45.691020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.899 [2024-07-26 12:19:45.691072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.899 [2024-07-26 12:19:45.691088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.305 ms 00:25:57.899 [2024-07-26 12:19:45.691098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.899 [2024-07-26 12:19:45.691156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.899 [2024-07-26 12:19:45.691174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106240 / 261120 wr_cnt: 1 state: open 00:25:57.899 [2024-07-26 12:19:45.691188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 
261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.899 [2024-07-26 12:19:45.691533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691851] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.691990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 
12:19:45.692115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.900 [2024-07-26 12:19:45.692261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.900 [2024-07-26 12:19:45.692272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8e1e333-1a42-4b51-b148-ec7db2a58227 00:25:57.900 [2024-07-26 12:19:45.692288] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106240 00:25:57.900 [2024-07-26 12:19:45.692298] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107200 00:25:57.900 [2024-07-26 12:19:45.692311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106240 00:25:57.900 [2024-07-26 12:19:45.692322] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:25:57.900 [2024-07-26 12:19:45.692332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.900 [2024-07-26 12:19:45.692343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.900 [2024-07-26 12:19:45.692352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.900 [2024-07-26 12:19:45.692362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.900 [2024-07-26 12:19:45.692371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.900 [2024-07-26 12:19:45.692381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.900 [2024-07-26 12:19:45.692391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.900 [2024-07-26 12:19:45.692413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:25:57.900 [2024-07-26 12:19:45.692423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.900 [2024-07-26 12:19:45.713101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.900 [2024-07-26 
12:19:45.713150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.900 [2024-07-26 12:19:45.713164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.673 ms 00:25:57.900 [2024-07-26 12:19:45.713174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.900 [2024-07-26 12:19:45.713662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.900 [2024-07-26 12:19:45.713673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.900 [2024-07-26 12:19:45.713683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:25:57.900 [2024-07-26 12:19:45.713693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.900 [2024-07-26 12:19:45.759390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.900 [2024-07-26 12:19:45.759634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.900 [2024-07-26 12:19:45.759803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.900 [2024-07-26 12:19:45.759841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.900 [2024-07-26 12:19:45.759935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.901 [2024-07-26 12:19:45.759968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.901 [2024-07-26 12:19:45.759997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.901 [2024-07-26 12:19:45.760062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.901 [2024-07-26 12:19:45.760240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.901 [2024-07-26 12:19:45.760283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.901 [2024-07-26 12:19:45.760314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.901 [2024-07-26 12:19:45.760342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.901 [2024-07-26 12:19:45.760380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.901 [2024-07-26 12:19:45.760540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.901 [2024-07-26 12:19:45.760588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.901 [2024-07-26 12:19:45.760618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.160 [2024-07-26 12:19:45.880510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.160 [2024-07-26 12:19:45.880751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:58.160 [2024-07-26 12:19:45.880774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.160 [2024-07-26 12:19:45.880785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.160 [2024-07-26 12:19:45.986055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.160 [2024-07-26 12:19:45.986320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:58.160 [2024-07-26 12:19:45.986425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.160 [2024-07-26 12:19:45.986462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.160 [2024-07-26 12:19:45.986578] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.160 [2024-07-26 12:19:45.986626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:58.160 [2024-07-26 12:19:45.986659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.160 [2024-07-26 12:19:45.986749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.986833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.161 [2024-07-26 12:19:45.986867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:58.161 [2024-07-26 12:19:45.986897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.161 [2024-07-26 12:19:45.986926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.987052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.161 [2024-07-26 12:19:45.987095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:58.161 [2024-07-26 12:19:45.987139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.161 [2024-07-26 12:19:45.987174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.987238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.161 [2024-07-26 12:19:45.987273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:58.161 [2024-07-26 12:19:45.987367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.161 [2024-07-26 12:19:45.987396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.987451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.161 [2024-07-26 12:19:45.987482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:58.161 [2024-07-26 12:19:45.987582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.161 [2024-07-26 12:19:45.987662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.987737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:58.161 [2024-07-26 12:19:45.987809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:58.161 [2024-07-26 12:19:45.987844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:58.161 [2024-07-26 12:19:45.987874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:58.161 [2024-07-26 12:19:45.988061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 672.345 ms, result 0 00:26:00.067 00:26:00.067 00:26:00.067 12:19:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:01.965 12:19:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:01.965 [2024-07-26 12:19:49.709217] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:26:01.965 [2024-07-26 12:19:49.709536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83422 ] 00:26:01.965 [2024-07-26 12:19:49.879584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.223 [2024-07-26 12:19:50.133111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.791 [2024-07-26 12:19:50.537231] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.791 [2024-07-26 12:19:50.537458] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:02.791 [2024-07-26 12:19:50.698869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.699102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:02.791 [2024-07-26 12:19:50.699295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:02.791 [2024-07-26 12:19:50.699335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.699423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.699580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.791 [2024-07-26 12:19:50.699595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:02.791 [2024-07-26 12:19:50.699609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.699638] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:02.791 [2024-07-26 12:19:50.700816] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:02.791 [2024-07-26 12:19:50.700853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.700864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.791 [2024-07-26 12:19:50.700875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:26:02.791 [2024-07-26 12:19:50.700884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.702335] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:02.791 [2024-07-26 12:19:50.723182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.723225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:02.791 [2024-07-26 12:19:50.723240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.882 ms 00:26:02.791 [2024-07-26 12:19:50.723250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.723316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.723331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:02.791 [2024-07-26 12:19:50.723342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:02.791 [2024-07-26 12:19:50.723352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.730224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:02.791 [2024-07-26 12:19:50.730254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.791 [2024-07-26 12:19:50.730267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.808 ms 00:26:02.791 [2024-07-26 12:19:50.730277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.730361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.730375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.791 [2024-07-26 12:19:50.730386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:02.791 [2024-07-26 12:19:50.730396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.730444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.730456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:02.791 [2024-07-26 12:19:50.730467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:02.791 [2024-07-26 12:19:50.730477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.730503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:02.791 [2024-07-26 12:19:50.736261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.736296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.791 [2024-07-26 12:19:50.736309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.774 ms 00:26:02.791 [2024-07-26 12:19:50.736319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.736355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.736367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:02.791 [2024-07-26 12:19:50.736377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:02.791 [2024-07-26 12:19:50.736387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.736444] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:02.791 [2024-07-26 12:19:50.736468] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:02.791 [2024-07-26 12:19:50.736504] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:02.791 [2024-07-26 12:19:50.736523] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:02.791 [2024-07-26 12:19:50.736607] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:02.791 [2024-07-26 12:19:50.736621] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:02.791 [2024-07-26 12:19:50.736633] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:02.791 [2024-07-26 12:19:50.736646] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:02.791 [2024-07-26 12:19:50.736659] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:02.791 [2024-07-26 12:19:50.736671] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:02.791 [2024-07-26 12:19:50.736681] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:02.791 [2024-07-26 12:19:50.736691] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:02.791 [2024-07-26 12:19:50.736701] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:02.791 [2024-07-26 12:19:50.736712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.736725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:02.791 [2024-07-26 12:19:50.736735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:02.791 [2024-07-26 12:19:50.736745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.736811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.791 [2024-07-26 12:19:50.736821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:02.791 [2024-07-26 12:19:50.736832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:02.791 [2024-07-26 12:19:50.736841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.791 [2024-07-26 12:19:50.736921] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:02.791 [2024-07-26 12:19:50.736933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:02.791 [2024-07-26 12:19:50.736947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.791 [2024-07-26 12:19:50.736957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.791 [2024-07-26 12:19:50.736967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:02.791 [2024-07-26 12:19:50.736977] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:02.791 [2024-07-26 12:19:50.736986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:02.791 [2024-07-26 12:19:50.736996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:02.791 [2024-07-26 12:19:50.737005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:02.791 [2024-07-26 12:19:50.737014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.791 [2024-07-26 12:19:50.737025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:02.791 [2024-07-26 12:19:50.737034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:02.791 [2024-07-26 12:19:50.737043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:02.791 [2024-07-26 12:19:50.737052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:02.791 [2024-07-26 12:19:50.737061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:02.791 [2024-07-26 12:19:50.737071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.791 [2024-07-26 12:19:50.737080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:02.791 [2024-07-26 12:19:50.737090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:02.791 [2024-07-26 12:19:50.737099] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.791 [2024-07-26 12:19:50.737108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:02.791 [2024-07-26 12:19:50.737148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:02.791 [2024-07-26 12:19:50.737158] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.791 [2024-07-26 12:19:50.737179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:02.791 [2024-07-26 12:19:50.737189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:02.791 [2024-07-26 12:19:50.737198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.791 [2024-07-26 12:19:50.737206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:02.791 [2024-07-26 12:19:50.737216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.792 [2024-07-26 12:19:50.737234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:02.792 [2024-07-26 12:19:50.737243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:02.792 [2024-07-26 12:19:50.737261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:02.792 [2024-07-26 12:19:50.737270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.792 [2024-07-26 12:19:50.737288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:02.792 [2024-07-26 12:19:50.737298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:02.792 [2024-07-26 12:19:50.737307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:02.792 [2024-07-26 12:19:50.737316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:02.792 [2024-07-26 12:19:50.737331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:02.792 [2024-07-26 12:19:50.737340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:02.792 [2024-07-26 12:19:50.737357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:02.792 [2024-07-26 12:19:50.737367] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737375] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:02.792 [2024-07-26 12:19:50.737385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:02.792 [2024-07-26 12:19:50.737395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:02.792 [2024-07-26 12:19:50.737404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:02.792 [2024-07-26 12:19:50.737414] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:02.792 [2024-07-26 12:19:50.737424] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:02.792 [2024-07-26 12:19:50.737433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:02.792 
[2024-07-26 12:19:50.737442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:02.792 [2024-07-26 12:19:50.737451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:02.792 [2024-07-26 12:19:50.737460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:02.792 [2024-07-26 12:19:50.737470] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:02.792 [2024-07-26 12:19:50.737481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:02.792 [2024-07-26 12:19:50.737503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:02.792 [2024-07-26 12:19:50.737513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:02.792 [2024-07-26 12:19:50.737523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:02.792 [2024-07-26 12:19:50.737533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:02.792 [2024-07-26 12:19:50.737543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:02.792 [2024-07-26 12:19:50.737553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:02.792 [2024-07-26 12:19:50.737563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:02.792 [2024-07-26 12:19:50.737574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:02.792 [2024-07-26 12:19:50.737584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:02.792 [2024-07-26 12:19:50.737643] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:02.792 [2024-07-26 12:19:50.737654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:02.792 [2024-07-26 12:19:50.737679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:02.792 [2024-07-26 12:19:50.737689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:02.792 [2024-07-26 12:19:50.737701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:02.792 [2024-07-26 12:19:50.737713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.792 [2024-07-26 12:19:50.737723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:02.792 [2024-07-26 12:19:50.737733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:26:02.792 [2024-07-26 12:19:50.737742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.791926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.791988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.051 [2024-07-26 12:19:50.792003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.215 ms 00:26:03.051 [2024-07-26 12:19:50.792014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.792108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.792135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.051 [2024-07-26 12:19:50.792147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:03.051 [2024-07-26 12:19:50.792157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.843375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.843424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.051 [2024-07-26 12:19:50.843439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.207 ms 00:26:03.051 [2024-07-26 12:19:50.843449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.843501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.843513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.051 [2024-07-26 12:19:50.843523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.051 [2024-07-26 12:19:50.843538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.844017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.844031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.051 [2024-07-26 12:19:50.844042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:26:03.051 [2024-07-26 12:19:50.844052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.844190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.844205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.051 [2024-07-26 12:19:50.844216] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:03.051 [2024-07-26 12:19:50.844226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.864701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.864746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.051 [2024-07-26 12:19:50.864761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.482 ms 00:26:03.051 [2024-07-26 12:19:50.864775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.884323] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:03.051 [2024-07-26 12:19:50.884368] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:03.051 [2024-07-26 12:19:50.884384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.884395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:03.051 [2024-07-26 12:19:50.884408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.511 ms 00:26:03.051 [2024-07-26 12:19:50.884417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.913998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.914051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:03.051 [2024-07-26 12:19:50.914067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.578 ms 00:26:03.051 [2024-07-26 12:19:50.914077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.932961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.933005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:03.051 [2024-07-26 12:19:50.933020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.846 ms 00:26:03.051 [2024-07-26 12:19:50.933030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.952364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.952408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:03.051 [2024-07-26 12:19:50.952422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.320 ms 00:26:03.051 [2024-07-26 12:19:50.952432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.051 [2024-07-26 12:19:50.953333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.051 [2024-07-26 12:19:50.953364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:03.051 [2024-07-26 12:19:50.953377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:26:03.051 [2024-07-26 12:19:50.953387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.043646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.043715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:03.309 [2024-07-26 12:19:51.043731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.377 ms 00:26:03.309 [2024-07-26 12:19:51.043748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.056328] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:03.309 [2024-07-26 12:19:51.059481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.059532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:03.309 [2024-07-26 12:19:51.059555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.692 ms 00:26:03.309 [2024-07-26 12:19:51.059573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.059734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.059755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:03.309 [2024-07-26 12:19:51.059773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:03.309 [2024-07-26 12:19:51.059789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.061369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.061408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:03.309 [2024-07-26 12:19:51.061421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.509 ms 00:26:03.309 [2024-07-26 12:19:51.061431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.061469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.061480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:03.309 [2024-07-26 12:19:51.061491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:03.309 [2024-07-26 12:19:51.061500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.061534] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:03.309 [2024-07-26 12:19:51.061546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.061559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:03.309 [2024-07-26 12:19:51.061570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:03.309 [2024-07-26 12:19:51.061580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.099268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.099312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:03.309 [2024-07-26 12:19:51.099327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.729 ms 00:26:03.309 [2024-07-26 12:19:51.099344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.309 [2024-07-26 12:19:51.099419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.309 [2024-07-26 12:19:51.099431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:03.309 [2024-07-26 12:19:51.099442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:03.309 [2024-07-26 12:19:51.099452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:03.309 [2024-07-26 12:19:51.105988] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.831 ms, result 0 00:26:33.054  Copying: 1156/1048576 [kB] (1156 kBps) Copying: 6080/1048576 [kB] (4924 kBps) Copying: 41/1024 [MB] (35 MBps) Copying: 78/1024 [MB] (36 MBps) Copying: 115/1024 [MB] (37 MBps) Copying: 153/1024 [MB] (38 MBps) Copying: 192/1024 [MB] (38 MBps) Copying: 230/1024 [MB] (38 MBps) Copying: 268/1024 [MB] (37 MBps) Copying: 304/1024 [MB] (36 MBps) Copying: 341/1024 [MB] (36 MBps) Copying: 377/1024 [MB] (35 MBps) Copying: 415/1024 [MB] (37 MBps) Copying: 453/1024 [MB] (37 MBps) Copying: 490/1024 [MB] (37 MBps) Copying: 529/1024 [MB] (38 MBps) Copying: 567/1024 [MB] (37 MBps) Copying: 604/1024 [MB] (37 MBps) Copying: 642/1024 [MB] (37 MBps) Copying: 679/1024 [MB] (37 MBps) Copying: 716/1024 [MB] (37 MBps) Copying: 753/1024 [MB] (36 MBps) Copying: 790/1024 [MB] (36 MBps) Copying: 826/1024 [MB] (36 MBps) Copying: 858/1024 [MB] (31 MBps) Copying: 896/1024 [MB] (37 MBps) Copying: 933/1024 [MB] (37 MBps) Copying: 970/1024 [MB] (36 MBps) Copying: 1006/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-07-26 12:20:20.908246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.908320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:33.054 [2024-07-26 12:20:20.908356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:33.054 [2024-07-26 12:20:20.908371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.908401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.054 [2024-07-26 12:20:20.912485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.912525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.054 [2024-07-26 12:20:20.912540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:26:33.054 [2024-07-26 12:20:20.912550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.912761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.912773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.054 [2024-07-26 12:20:20.912791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:26:33.054 [2024-07-26 12:20:20.912801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.925361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.925456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.054 [2024-07-26 12:20:20.925473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.555 ms 00:26:33.054 [2024-07-26 12:20:20.925484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.930767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.930822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:33.054 [2024-07-26 12:20:20.930836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.247 ms 00:26:33.054 [2024-07-26 12:20:20.930859] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.974055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.974138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.054 [2024-07-26 12:20:20.974156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.186 ms 00:26:33.054 [2024-07-26 12:20:20.974167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:20.997196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:20.997268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.054 [2024-07-26 12:20:20.997287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.996 ms 00:26:33.054 [2024-07-26 12:20:20.997298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.054 [2024-07-26 12:20:21.000312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.054 [2024-07-26 12:20:21.000360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.054 [2024-07-26 12:20:21.000375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:26:33.054 [2024-07-26 12:20:21.000386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.314 [2024-07-26 12:20:21.043834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.314 [2024-07-26 12:20:21.043895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:33.314 [2024-07-26 12:20:21.043912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.495 ms 00:26:33.314 [2024-07-26 12:20:21.043922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.314 [2024-07-26 12:20:21.086888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.314 [2024-07-26 12:20:21.086950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:33.314 [2024-07-26 12:20:21.086966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.963 ms 00:26:33.314 [2024-07-26 12:20:21.086976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.314 [2024-07-26 12:20:21.128480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.314 [2024-07-26 12:20:21.128538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:33.314 [2024-07-26 12:20:21.128554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.505 ms 00:26:33.314 [2024-07-26 12:20:21.128579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.314 [2024-07-26 12:20:21.171618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.314 [2024-07-26 12:20:21.171689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:33.314 [2024-07-26 12:20:21.171707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.969 ms 00:26:33.314 [2024-07-26 12:20:21.171718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.314 [2024-07-26 12:20:21.171787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:33.314 [2024-07-26 12:20:21.171809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:33.314 [2024-07-26 12:20:21.171824] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:33.314 [2024-07-26 12:20:21.171837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.171995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172109] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:33.314 [2024-07-26 12:20:21.172379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 
12:20:21.172423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:26:33.315 [2024-07-26 12:20:21.172700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:33.315 [2024-07-26 12:20:21.172985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.315 
[2024-07-26 12:20:21.172996] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8e1e333-1a42-4b51-b148-ec7db2a58227 00:26:33.315 [2024-07-26 12:20:21.173008] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:33.315 [2024-07-26 12:20:21.173022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160448 00:26:33.315 [2024-07-26 12:20:21.173032] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158464 00:26:33.315 [2024-07-26 12:20:21.173043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:26:33.315 [2024-07-26 12:20:21.173057] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.315 [2024-07-26 12:20:21.173068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.315 [2024-07-26 12:20:21.173078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.315 [2024-07-26 12:20:21.173104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.315 [2024-07-26 12:20:21.173116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.315 [2024-07-26 12:20:21.173127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.315 [2024-07-26 12:20:21.173146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.315 [2024-07-26 12:20:21.173158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:26:33.315 [2024-07-26 12:20:21.173169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.195046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.315 [2024-07-26 12:20:21.195100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:33.315 [2024-07-26 12:20:21.195147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.858 ms 00:26:33.315 [2024-07-26 12:20:21.195170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.195715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.315 [2024-07-26 12:20:21.195733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:33.315 [2024-07-26 12:20:21.195745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:26:33.315 [2024-07-26 12:20:21.195755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.242714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.315 [2024-07-26 12:20:21.242775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.315 [2024-07-26 12:20:21.242790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.315 [2024-07-26 12:20:21.242800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.242871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.315 [2024-07-26 12:20:21.242882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.315 [2024-07-26 12:20:21.242893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.315 [2024-07-26 12:20:21.242902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.243005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.315 
[2024-07-26 12:20:21.243023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.315 [2024-07-26 12:20:21.243033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.315 [2024-07-26 12:20:21.243043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.315 [2024-07-26 12:20:21.243060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.315 [2024-07-26 12:20:21.243070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.315 [2024-07-26 12:20:21.243080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.315 [2024-07-26 12:20:21.243090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.365045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.365116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.575 [2024-07-26 12:20:21.365150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.365161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.472504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.472570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.575 [2024-07-26 12:20:21.472587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.472597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.472699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.472711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.575 [2024-07-26 12:20:21.472725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.472735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.472780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.472791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.575 [2024-07-26 12:20:21.472801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.472811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.472924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.472938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.575 [2024-07-26 12:20:21.472949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.472963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.473002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.473013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:33.575 [2024-07-26 12:20:21.473023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.473033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.473072] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.473082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.575 [2024-07-26 12:20:21.473092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.473101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.473178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.575 [2024-07-26 12:20:21.473190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.575 [2024-07-26 12:20:21.473201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.575 [2024-07-26 12:20:21.473210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.575 [2024-07-26 12:20:21.473326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 565.973 ms, result 0 00:26:34.952 00:26:34.952 00:26:34.952 12:20:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:36.855 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:36.855 12:20:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:36.855 [2024-07-26 12:20:24.570950] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:26:36.855 [2024-07-26 12:20:24.571185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83770 ] 00:26:36.855 [2024-07-26 12:20:24.761340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.132 [2024-07-26 12:20:24.997088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.701 [2024-07-26 12:20:25.402871] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.701 [2024-07-26 12:20:25.402952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.701 [2024-07-26 12:20:25.564918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.564988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:37.701 [2024-07-26 12:20:25.565005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:37.701 [2024-07-26 12:20:25.565015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.565094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.565107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.701 [2024-07-26 12:20:25.565119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:37.701 [2024-07-26 12:20:25.565132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.565185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:37.701 [2024-07-26 12:20:25.566411] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:37.701 [2024-07-26 12:20:25.566446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.566457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.701 [2024-07-26 12:20:25.566469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:26:37.701 [2024-07-26 12:20:25.566480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.567968] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:37.701 [2024-07-26 12:20:25.590095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.590405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:37.701 [2024-07-26 12:20:25.590433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.158 ms 00:26:37.701 [2024-07-26 12:20:25.590445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.590551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.590569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:37.701 [2024-07-26 12:20:25.590581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:37.701 [2024-07-26 12:20:25.590592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.598581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.598629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.701 [2024-07-26 12:20:25.598645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.890 ms 00:26:37.701 [2024-07-26 12:20:25.598656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.598764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.598782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.701 [2024-07-26 12:20:25.598793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:37.701 [2024-07-26 12:20:25.598804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.598867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.598880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:37.701 [2024-07-26 12:20:25.598891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:37.701 [2024-07-26 12:20:25.598902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.701 [2024-07-26 12:20:25.598931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.701 [2024-07-26 12:20:25.605368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.701 [2024-07-26 12:20:25.605441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.701 [2024-07-26 12:20:25.605457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.454 ms 00:26:37.701 [2024-07-26 12:20:25.605469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.702 [2024-07-26 
12:20:25.605524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.702 [2024-07-26 12:20:25.605537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:37.702 [2024-07-26 12:20:25.605548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:37.702 [2024-07-26 12:20:25.605559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.702 [2024-07-26 12:20:25.605648] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:37.702 [2024-07-26 12:20:25.605676] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:37.702 [2024-07-26 12:20:25.605715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:37.702 [2024-07-26 12:20:25.605737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:37.702 [2024-07-26 12:20:25.605830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:37.702 [2024-07-26 12:20:25.605844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:37.702 [2024-07-26 12:20:25.605858] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:37.702 [2024-07-26 12:20:25.605872] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:37.702 [2024-07-26 12:20:25.605885] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:37.702 [2024-07-26 12:20:25.605897] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:37.702 [2024-07-26 12:20:25.605908] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:37.702 [2024-07-26 12:20:25.605919] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:37.702 [2024-07-26 12:20:25.605929] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:37.702 [2024-07-26 12:20:25.605940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.702 [2024-07-26 12:20:25.605954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:37.702 [2024-07-26 12:20:25.605965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:37.702 [2024-07-26 12:20:25.605976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.702 [2024-07-26 12:20:25.606054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.702 [2024-07-26 12:20:25.606065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:37.702 [2024-07-26 12:20:25.606077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:37.702 [2024-07-26 12:20:25.606087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.702 [2024-07-26 12:20:25.606198] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:37.702 [2024-07-26 12:20:25.606213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:37.702 [2024-07-26 12:20:25.606228] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606240] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:37.702 [2024-07-26 12:20:25.606261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606271] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:37.702 [2024-07-26 12:20:25.606292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.702 [2024-07-26 12:20:25.606312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:37.702 [2024-07-26 12:20:25.606322] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:37.702 [2024-07-26 12:20:25.606332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.702 [2024-07-26 12:20:25.606344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:37.702 [2024-07-26 12:20:25.606354] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:37.702 [2024-07-26 12:20:25.606364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:37.702 [2024-07-26 12:20:25.606383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:37.702 [2024-07-26 12:20:25.606426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606436] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:37.702 [2024-07-26 12:20:25.606456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:37.702 [2024-07-26 12:20:25.606485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:37.702 [2024-07-26 12:20:25.606514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.702 [2024-07-26 12:20:25.606533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:37.702 [2024-07-26 12:20:25.606543] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606552] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.702 [2024-07-26 12:20:25.606562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:37.702 [2024-07-26 12:20:25.606572] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:37.702 [2024-07-26 12:20:25.606581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.702 [2024-07-26 12:20:25.606591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:37.702 [2024-07-26 12:20:25.606601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:37.702 [2024-07-26 12:20:25.606610] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:37.702 [2024-07-26 12:20:25.606629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:37.702 [2024-07-26 12:20:25.606639] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.702 [2024-07-26 12:20:25.606648] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:37.702 [2024-07-26 12:20:25.606658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:37.702 [2024-07-26 12:20:25.606669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.703 [2024-07-26 12:20:25.606679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.703 [2024-07-26 12:20:25.606690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:37.703 [2024-07-26 12:20:25.606700] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:37.703 [2024-07-26 12:20:25.606710] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:37.703 [2024-07-26 12:20:25.606719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:37.703 [2024-07-26 12:20:25.606729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:37.703 [2024-07-26 12:20:25.606739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:37.703 [2024-07-26 12:20:25.606750] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:37.703 [2024-07-26 12:20:25.606764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:37.703 [2024-07-26 12:20:25.606787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:37.703 [2024-07-26 12:20:25.606798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:37.703 [2024-07-26 12:20:25.606809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:37.703 [2024-07-26 12:20:25.606820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:37.703 [2024-07-26 12:20:25.606831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:37.703 [2024-07-26 12:20:25.606842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:37.703 [2024-07-26 
12:20:25.606853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:37.703 [2024-07-26 12:20:25.606864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:37.703 [2024-07-26 12:20:25.606875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:37.703 [2024-07-26 12:20:25.606930] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:37.703 [2024-07-26 12:20:25.606941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.703 [2024-07-26 12:20:25.606967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:37.703 [2024-07-26 12:20:25.606979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:37.703 [2024-07-26 12:20:25.606991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:37.703 [2024-07-26 12:20:25.607002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.703 [2024-07-26 12:20:25.607013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:37.703 [2024-07-26 12:20:25.607024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:26:37.703 [2024-07-26 12:20:25.607036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.703 [2024-07-26 12:20:25.664752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.703 [2024-07-26 12:20:25.664816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.703 [2024-07-26 12:20:25.664832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.753 ms 00:26:37.703 [2024-07-26 12:20:25.664843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.703 [2024-07-26 12:20:25.664943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.703 [2024-07-26 12:20:25.664954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:37.703 [2024-07-26 12:20:25.664964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:37.703 [2024-07-26 12:20:25.664974] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.717081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.717156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.963 [2024-07-26 12:20:25.717172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.091 ms 00:26:37.963 [2024-07-26 12:20:25.717183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.717254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.717266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.963 [2024-07-26 12:20:25.717278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:37.963 [2024-07-26 12:20:25.717292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.717815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.717836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.963 [2024-07-26 12:20:25.717848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:26:37.963 [2024-07-26 12:20:25.717858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.718000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.718014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.963 [2024-07-26 12:20:25.718026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:26:37.963 [2024-07-26 12:20:25.718036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.739116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.739180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.963 [2024-07-26 12:20:25.739196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.086 ms 00:26:37.963 [2024-07-26 12:20:25.739211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.760485] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:37.963 [2024-07-26 12:20:25.760550] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:37.963 [2024-07-26 12:20:25.760570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.760581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:37.963 [2024-07-26 12:20:25.760593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.246 ms 00:26:37.963 [2024-07-26 12:20:25.760603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.791857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.791952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:37.963 [2024-07-26 12:20:25.791968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.229 ms 00:26:37.963 [2024-07-26 12:20:25.791979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 
12:20:25.813182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.813232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:37.963 [2024-07-26 12:20:25.813246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.149 ms 00:26:37.963 [2024-07-26 12:20:25.813256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.834707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.834774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:37.963 [2024-07-26 12:20:25.834789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.408 ms 00:26:37.963 [2024-07-26 12:20:25.834800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.835775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.835808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.963 [2024-07-26 12:20:25.835820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:26:37.963 [2024-07-26 12:20:25.835831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.963 [2024-07-26 12:20:25.926759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.963 [2024-07-26 12:20:25.926819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:37.963 [2024-07-26 12:20:25.926836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.044 ms 00:26:37.963 [2024-07-26 12:20:25.926852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.941664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.223 [2024-07-26 12:20:25.944986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.945028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.223 [2024-07-26 12:20:25.945043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.082 ms 00:26:38.223 [2024-07-26 12:20:25.945052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.945178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.945192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.223 [2024-07-26 12:20:25.945204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:38.223 [2024-07-26 12:20:25.945214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.946157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.946184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.223 [2024-07-26 12:20:25.946196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:26:38.223 [2024-07-26 12:20:25.946207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.946235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.946246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.223 [2024-07-26 12:20:25.946257] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:38.223 [2024-07-26 12:20:25.946267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.946302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.223 [2024-07-26 12:20:25.946314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.946328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.223 [2024-07-26 12:20:25.946338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:38.223 [2024-07-26 12:20:25.946349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.988315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.988397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.223 [2024-07-26 12:20:25.988415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.009 ms 00:26:38.223 [2024-07-26 12:20:25.988435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.988550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.223 [2024-07-26 12:20:25.988563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.223 [2024-07-26 12:20:25.988574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:38.223 [2024-07-26 12:20:25.988585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.223 [2024-07-26 12:20:25.989823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 425.112 ms, result 0 00:27:10.474  Copying: 34/1024 [MB] (34 MBps) Copying: 64/1024 [MB] (30 MBps) Copying: 96/1024 [MB] (31 MBps) Copying: 129/1024 [MB] (32 MBps) Copying: 159/1024 [MB] (30 MBps) Copying: 189/1024 [MB] (29 MBps) Copying: 219/1024 [MB] (30 MBps) Copying: 249/1024 [MB] (30 MBps) Copying: 279/1024 [MB] (29 MBps) Copying: 308/1024 [MB] (29 MBps) Copying: 340/1024 [MB] (31 MBps) Copying: 372/1024 [MB] (32 MBps) Copying: 403/1024 [MB] (30 MBps) Copying: 434/1024 [MB] (30 MBps) Copying: 466/1024 [MB] (31 MBps) Copying: 496/1024 [MB] (30 MBps) Copying: 527/1024 [MB] (30 MBps) Copying: 557/1024 [MB] (29 MBps) Copying: 588/1024 [MB] (30 MBps) Copying: 620/1024 [MB] (32 MBps) Copying: 655/1024 [MB] (35 MBps) Copying: 688/1024 [MB] (32 MBps) Copying: 721/1024 [MB] (33 MBps) Copying: 752/1024 [MB] (30 MBps) Copying: 785/1024 [MB] (32 MBps) Copying: 819/1024 [MB] (34 MBps) Copying: 852/1024 [MB] (33 MBps) Copying: 885/1024 [MB] (32 MBps) Copying: 918/1024 [MB] (33 MBps) Copying: 953/1024 [MB] (35 MBps) Copying: 988/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-26 12:20:58.355865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.355950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:10.474 [2024-07-26 12:20:58.355974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:10.474 [2024-07-26 12:20:58.355990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.356028] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:10.474 [2024-07-26 12:20:58.360175] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.360234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:10.474 [2024-07-26 12:20:58.360253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.125 ms 00:27:10.474 [2024-07-26 12:20:58.360276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.360551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.360571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:10.474 [2024-07-26 12:20:58.360587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:27:10.474 [2024-07-26 12:20:58.360602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.364814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.364867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:10.474 [2024-07-26 12:20:58.364886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:27:10.474 [2024-07-26 12:20:58.364901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.371339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.371545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:10.474 [2024-07-26 12:20:58.371694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.396 ms 00:27:10.474 [2024-07-26 12:20:58.371749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.422818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.423147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:10.474 [2024-07-26 12:20:58.423281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.994 ms 00:27:10.474 [2024-07-26 12:20:58.423335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.474 [2024-07-26 12:20:58.450576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.474 [2024-07-26 12:20:58.450845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:10.474 [2024-07-26 12:20:58.450944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.188 ms 00:27:10.474 [2024-07-26 12:20:58.450983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.791 [2024-07-26 12:20:58.454417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.792 [2024-07-26 12:20:58.454583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:10.792 [2024-07-26 12:20:58.454685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms 00:27:10.792 [2024-07-26 12:20:58.454703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.792 [2024-07-26 12:20:58.499898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.792 [2024-07-26 12:20:58.499961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:10.792 [2024-07-26 12:20:58.499979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.234 ms 00:27:10.792 [2024-07-26 12:20:58.499990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:10.792 [2024-07-26 12:20:58.545418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.792 [2024-07-26 12:20:58.545484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:10.792 [2024-07-26 12:20:58.545501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.448 ms 00:27:10.792 [2024-07-26 12:20:58.545512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.792 [2024-07-26 12:20:58.589163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.792 [2024-07-26 12:20:58.589489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:10.792 [2024-07-26 12:20:58.589604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.668 ms 00:27:10.792 [2024-07-26 12:20:58.589646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.792 [2024-07-26 12:20:58.633212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.792 [2024-07-26 12:20:58.633502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:10.792 [2024-07-26 12:20:58.633525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.501 ms 00:27:10.792 [2024-07-26 12:20:58.633536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.792 [2024-07-26 12:20:58.633583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:10.792 [2024-07-26 12:20:58.633609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:10.792 [2024-07-26 12:20:58.633623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:27:10.792 [2024-07-26 12:20:58.633635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633765] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.633996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 
12:20:58.634038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:27:10.792 [2024-07-26 12:20:58.634345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:10.792 [2024-07-26 12:20:58.634419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:10.793 [2024-07-26 12:20:58.634734] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:10.793 [2024-07-26 12:20:58.634744] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8e1e333-1a42-4b51-b148-ec7db2a58227 00:27:10.793 [2024-07-26 12:20:58.634760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:27:10.793 [2024-07-26 12:20:58.634770] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:10.793 [2024-07-26 12:20:58.634780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:10.793 [2024-07-26 12:20:58.634790] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:10.793 [2024-07-26 12:20:58.634799] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:10.793 [2024-07-26 12:20:58.634809] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:10.793 [2024-07-26 12:20:58.634819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:10.793 [2024-07-26 12:20:58.634828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:10.793 [2024-07-26 12:20:58.634837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:10.793 [2024-07-26 12:20:58.634847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.793 [2024-07-26 12:20:58.634873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:10.793 [2024-07-26 12:20:58.634888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:27:10.793 [2024-07-26 12:20:58.634898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.656249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.793 [2024-07-26 12:20:58.656307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:10.793 
[2024-07-26 12:20:58.656339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.330 ms 00:27:10.793 [2024-07-26 12:20:58.656350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.656894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:10.793 [2024-07-26 12:20:58.656905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:10.793 [2024-07-26 12:20:58.656916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:27:10.793 [2024-07-26 12:20:58.656930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.702524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.793 [2024-07-26 12:20:58.702575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:10.793 [2024-07-26 12:20:58.702590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.793 [2024-07-26 12:20:58.702604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.702677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.793 [2024-07-26 12:20:58.702688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:10.793 [2024-07-26 12:20:58.702698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.793 [2024-07-26 12:20:58.702712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.702809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.793 [2024-07-26 12:20:58.702823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:10.793 [2024-07-26 12:20:58.702833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.793 [2024-07-26 12:20:58.702843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.793 [2024-07-26 12:20:58.702860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.793 [2024-07-26 12:20:58.702870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:10.793 [2024-07-26 12:20:58.702880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.793 [2024-07-26 12:20:58.702891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.824985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.825045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:11.082 [2024-07-26 12:20:58.825062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.825073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.935434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.935493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:11.082 [2024-07-26 12:20:58.935508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.935529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.935623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.935635] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:11.082 [2024-07-26 12:20:58.935646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.935656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.935701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.935712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:11.082 [2024-07-26 12:20:58.935722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.935732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.935853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.935867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:11.082 [2024-07-26 12:20:58.935877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.935891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.935924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.935935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:11.082 [2024-07-26 12:20:58.935945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.935955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.936000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.936011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:11.082 [2024-07-26 12:20:58.936020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.936030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.936101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:11.082 [2024-07-26 12:20:58.936117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:11.082 [2024-07-26 12:20:58.936163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:11.082 [2024-07-26 12:20:58.936173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:11.082 [2024-07-26 12:20:58.936297] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 581.357 ms, result 0 00:27:12.460 00:27:12.460 00:27:12.460 12:21:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:14.368 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:14.368 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:14.368 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:14.368 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:14.368 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:14.368 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 
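The 'testfile2: OK' result above is the point of the dirty-shutdown exercise: the test writes data through the FTL bdev, records checksums, takes the device through a dirty-shutdown/recovery cycle, reads the data back (the 1024 MiB 'Copying' progress above), and re-verifies it. Stripped of the harness, the check is just the following sketch; the path in the second command is the one traced above, while the first command stands in for the part of the script that ran before this excerpt and is reconstructed here:

  # at write time: record a checksum of the test data (illustrative variable name)
  md5sum "$data_file" > "$data_file.md5"
  # after the dirty shutdown, recovery and read-back: the data must still match
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5    # -> testfile2: OK

The rm -f lines on either side of this point are restore_kill clearing the ftl.json config, the test files and their checksum files before the test app is killed.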
00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82124 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82124 ']' 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82124 00:27:14.627 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82124) - No such process 00:27:14.627 Process with pid 82124 is not found 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82124 is not found' 00:27:14.627 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:14.886 Remove shared memory files 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:14.886 ************************************ 00:27:14.886 END TEST ftl_dirty_shutdown 00:27:14.886 ************************************ 00:27:14.886 00:27:14.886 real 3m16.022s 00:27:14.886 user 3m42.730s 00:27:14.886 sys 0m34.599s 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.886 12:21:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:14.886 12:21:02 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:14.886 12:21:02 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:14.886 12:21:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.886 12:21:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:15.159 ************************************ 00:27:15.159 START TEST ftl_upgrade_shutdown 00:27:15.159 ************************************ 00:27:15.159 12:21:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:15.159 * Looking for test storage... 00:27:15.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
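The ftl_upgrade_shutdown test starting here begins by sourcing test/ftl/common.sh, and the dirname/readlink calls traced around this point are simply the script locating itself so that rootdir and rpc_py point back into the checked-out SPDK tree; in plain bash this amounts to roughly:

  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py

Everything the test does against the target afterwards goes through that rpc_py path.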
00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:15.159 
12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84222 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84222 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84222 ']' 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:15.159 12:21:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:15.430 [2024-07-26 12:21:03.162838] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
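With the dirty-shutdown run torn down, the upgrade/shutdown fixture is defined by the FTL_* exports above (base bdev on 0000:00:11.0 sized 20480 MiB, NV cache on 0000:00:10.0 sized 5120 MiB, and an L2P DRAM limit of 2) plus a single-core spdk_tgt that tcp_target_setup launches and then waits on. Condensed into a sketch, that setup is:

  export FTL_BDEV=ftl
  export FTL_BASE=0000:00:11.0  FTL_BASE_SIZE=20480
  export FTL_CACHE=0000:00:10.0 FTL_CACHE_SIZE=5120
  export FTL_L2P_DRAM_LIMIT=2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  spdk_tgt_pid=$!
  # poll the RPC socket until the target answers; this loop only approximates what
  # 'waitforlisten 84222' does (which also bounds retries and checks the pid is alive)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done

The EAL and reactor lines that follow are that spdk_tgt instance coming up on core 0.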
00:27:15.430 [2024-07-26 12:21:03.163247] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84222 ] 00:27:15.430 [2024-07-26 12:21:03.325302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.687 [2024-07-26 12:21:03.611035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:17.064 12:21:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:17.324 { 00:27:17.324 "name": "basen1", 00:27:17.324 "aliases": [ 00:27:17.324 "87dc8071-6bb2-4542-884d-e06589dd274c" 00:27:17.324 ], 00:27:17.324 "product_name": "NVMe disk", 00:27:17.324 "block_size": 4096, 00:27:17.324 "num_blocks": 1310720, 00:27:17.324 "uuid": "87dc8071-6bb2-4542-884d-e06589dd274c", 00:27:17.324 "assigned_rate_limits": { 00:27:17.324 "rw_ios_per_sec": 0, 00:27:17.324 "rw_mbytes_per_sec": 0, 00:27:17.324 "r_mbytes_per_sec": 0, 00:27:17.324 "w_mbytes_per_sec": 0 00:27:17.324 }, 00:27:17.324 "claimed": true, 00:27:17.324 "claim_type": "read_many_write_one", 00:27:17.324 "zoned": false, 00:27:17.324 "supported_io_types": { 00:27:17.324 "read": true, 00:27:17.324 "write": true, 00:27:17.324 "unmap": true, 00:27:17.324 "flush": true, 00:27:17.324 "reset": true, 00:27:17.324 "nvme_admin": true, 00:27:17.324 "nvme_io": true, 00:27:17.324 "nvme_io_md": false, 00:27:17.324 "write_zeroes": true, 00:27:17.324 "zcopy": false, 00:27:17.324 "get_zone_info": false, 00:27:17.324 "zone_management": false, 00:27:17.324 "zone_append": false, 00:27:17.324 "compare": true, 00:27:17.324 "compare_and_write": false, 00:27:17.324 "abort": true, 00:27:17.324 "seek_hole": false, 00:27:17.324 "seek_data": false, 00:27:17.324 "copy": true, 00:27:17.324 "nvme_iov_md": false 00:27:17.324 }, 00:27:17.324 "driver_specific": { 00:27:17.324 "nvme": [ 00:27:17.324 { 00:27:17.324 "pci_address": "0000:00:11.0", 00:27:17.324 "trid": { 00:27:17.324 "trtype": "PCIe", 00:27:17.324 "traddr": "0000:00:11.0" 00:27:17.324 }, 00:27:17.324 "ctrlr_data": { 00:27:17.324 "cntlid": 0, 00:27:17.324 "vendor_id": "0x1b36", 00:27:17.324 "model_number": "QEMU NVMe Ctrl", 00:27:17.324 "serial_number": "12341", 00:27:17.324 "firmware_revision": "8.0.0", 00:27:17.324 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:17.324 "oacs": { 00:27:17.324 "security": 0, 00:27:17.324 "format": 1, 00:27:17.324 "firmware": 0, 00:27:17.324 "ns_manage": 1 00:27:17.324 }, 00:27:17.324 "multi_ctrlr": false, 00:27:17.324 "ana_reporting": false 00:27:17.324 }, 00:27:17.324 "vs": { 00:27:17.324 "nvme_version": "1.4" 00:27:17.324 }, 00:27:17.324 "ns_data": { 00:27:17.324 "id": 1, 00:27:17.324 "can_share": false 00:27:17.324 } 00:27:17.324 } 00:27:17.324 ], 00:27:17.324 "mp_policy": "active_passive" 00:27:17.324 } 00:27:17.324 } 00:27:17.324 ]' 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:17.324 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:17.583 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f3dc334c-b128-475d-9e2a-c2201f963684 00:27:17.583 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:17.583 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3dc334c-b128-475d-9e2a-c2201f963684 00:27:17.842 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:18.101 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5106f4cf-3eff-4051-9fdd-24e5d7e6abc5 00:27:18.101 12:21:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5106f4cf-3eff-4051-9fdd-24e5d7e6abc5 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f373d762-5924-47b4-a490-a8ba23d24b95 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f373d762-5924-47b4-a490-a8ba23d24b95 ]] 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f373d762-5924-47b4-a490-a8ba23d24b95 5120 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f373d762-5924-47b4-a490-a8ba23d24b95 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f373d762-5924-47b4-a490-a8ba23d24b95 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=f373d762-5924-47b4-a490-a8ba23d24b95 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:18.360 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f373d762-5924-47b4-a490-a8ba23d24b95 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:18.619 { 00:27:18.619 "name": "f373d762-5924-47b4-a490-a8ba23d24b95", 00:27:18.619 "aliases": [ 00:27:18.619 "lvs/basen1p0" 00:27:18.619 ], 00:27:18.619 "product_name": "Logical Volume", 00:27:18.619 "block_size": 4096, 00:27:18.619 "num_blocks": 5242880, 00:27:18.619 "uuid": "f373d762-5924-47b4-a490-a8ba23d24b95", 00:27:18.619 "assigned_rate_limits": { 00:27:18.619 "rw_ios_per_sec": 0, 00:27:18.619 "rw_mbytes_per_sec": 0, 00:27:18.619 "r_mbytes_per_sec": 0, 00:27:18.619 "w_mbytes_per_sec": 0 00:27:18.619 }, 00:27:18.619 "claimed": false, 00:27:18.619 "zoned": false, 00:27:18.619 "supported_io_types": { 00:27:18.619 "read": true, 00:27:18.619 "write": true, 00:27:18.619 "unmap": true, 00:27:18.619 "flush": false, 00:27:18.619 "reset": true, 00:27:18.619 "nvme_admin": false, 00:27:18.619 "nvme_io": false, 00:27:18.619 "nvme_io_md": false, 00:27:18.619 "write_zeroes": true, 00:27:18.619 
"zcopy": false, 00:27:18.619 "get_zone_info": false, 00:27:18.619 "zone_management": false, 00:27:18.619 "zone_append": false, 00:27:18.619 "compare": false, 00:27:18.619 "compare_and_write": false, 00:27:18.619 "abort": false, 00:27:18.619 "seek_hole": true, 00:27:18.619 "seek_data": true, 00:27:18.619 "copy": false, 00:27:18.619 "nvme_iov_md": false 00:27:18.619 }, 00:27:18.619 "driver_specific": { 00:27:18.619 "lvol": { 00:27:18.619 "lvol_store_uuid": "5106f4cf-3eff-4051-9fdd-24e5d7e6abc5", 00:27:18.619 "base_bdev": "basen1", 00:27:18.619 "thin_provision": true, 00:27:18.619 "num_allocated_clusters": 0, 00:27:18.619 "snapshot": false, 00:27:18.619 "clone": false, 00:27:18.619 "esnap_clone": false 00:27:18.619 } 00:27:18.619 } 00:27:18.619 } 00:27:18.619 ]' 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:18.619 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:18.878 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:18.878 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:18.878 12:21:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:19.137 12:21:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:19.137 12:21:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:19.137 12:21:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f373d762-5924-47b4-a490-a8ba23d24b95 -c cachen1p0 --l2p_dram_limit 2 00:27:19.426 [2024-07-26 12:21:07.259106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.259181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:19.426 [2024-07-26 12:21:07.259201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:19.426 [2024-07-26 12:21:07.259216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.259288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.259304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:19.426 [2024-07-26 12:21:07.259316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:19.426 [2024-07-26 12:21:07.259331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.259354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:19.426 [2024-07-26 12:21:07.260540] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:19.426 [2024-07-26 12:21:07.260575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.260594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:19.426 [2024-07-26 12:21:07.260606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.228 ms 00:27:19.426 [2024-07-26 12:21:07.260620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.260743] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1b0154a1-58e8-4c6b-b408-534781e5ff98 00:27:19.426 [2024-07-26 12:21:07.262253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.262291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:19.426 [2024-07-26 12:21:07.262308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:19.426 [2024-07-26 12:21:07.262319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.269966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.270008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:19.426 [2024-07-26 12:21:07.270025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.600 ms 00:27:19.426 [2024-07-26 12:21:07.270036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.270106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.270145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:19.426 [2024-07-26 12:21:07.270161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:19.426 [2024-07-26 12:21:07.270172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.270262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.270275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:19.426 [2024-07-26 12:21:07.270292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:19.426 [2024-07-26 12:21:07.270303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.270334] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:19.426 [2024-07-26 12:21:07.276312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.276367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:19.426 [2024-07-26 12:21:07.276381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.997 ms 00:27:19.426 [2024-07-26 12:21:07.276396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.276439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.276455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:19.426 [2024-07-26 12:21:07.276466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:19.426 [2024-07-26 12:21:07.276480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
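Everything from create_base_bdev down to the bdev_ftl_create call above reduces to the following RPC sequence; this is a condensed sketch of the traced commands, with shell variables standing in for the values the harness captures (the UUIDs in the log). Note the base volume is created thin-provisioned (-t): the 0000:00:11.0 namespace is only 5120 MiB (1310720 blocks of 4096 B, as the jq probes show), yet the test hands FTL a 20480 MiB base, which only works because the logical volume allocates clusters lazily.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # base side: attach the NVMe namespace and size it up
  $rpc_py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0      # -> basen1
  $rpc_py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks'                  # 1310720 (x 4096 B = 5120 MiB)
  # clear any stale lvstores, then carve a 20480 MiB thin-provisioned volume for FTL's base
  for u in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
    $rpc_py bdev_lvol_delete_lvstore -u "$u"
  done
  lvs=$($rpc_py bdev_lvol_create_lvstore basen1 lvs)
  lvol=$($rpc_py bdev_lvol_create basen1p0 20480 -t -u "$lvs")
  # cache side: attach the second NVMe and split off a 5120 MiB partition for the NV cache
  $rpc_py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0     # -> cachen1
  $rpc_py bdev_split_create cachen1 -s 5120 1                              # -> cachen1p0
  # glue base + cache together into the FTL bdev under test
  $rpc_py -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2

The 'FTL startup' trace around this point, and the layout dump that follows, is that bdev_ftl_create call doing its work.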
00:27:19.426 [2024-07-26 12:21:07.276566] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:19.426 [2024-07-26 12:21:07.276712] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:19.426 [2024-07-26 12:21:07.276727] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:19.426 [2024-07-26 12:21:07.276747] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:19.426 [2024-07-26 12:21:07.276762] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:19.426 [2024-07-26 12:21:07.276777] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:19.426 [2024-07-26 12:21:07.276789] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:19.426 [2024-07-26 12:21:07.276807] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:19.426 [2024-07-26 12:21:07.276817] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:19.426 [2024-07-26 12:21:07.276830] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:19.426 [2024-07-26 12:21:07.276841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.276854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:19.426 [2024-07-26 12:21:07.276865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:27:19.426 [2024-07-26 12:21:07.276878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.276953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.426 [2024-07-26 12:21:07.276968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:19.426 [2024-07-26 12:21:07.276978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:19.426 [2024-07-26 12:21:07.276995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.426 [2024-07-26 12:21:07.277084] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:19.426 [2024-07-26 12:21:07.277102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:19.426 [2024-07-26 12:21:07.277114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:19.426 [2024-07-26 12:21:07.277147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.426 [2024-07-26 12:21:07.277159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:19.426 [2024-07-26 12:21:07.277172] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:19.426 [2024-07-26 12:21:07.277195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:19.426 [2024-07-26 12:21:07.277209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:19.426 [2024-07-26 12:21:07.277219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:19.426 [2024-07-26 12:21:07.277233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.426 [2024-07-26 12:21:07.277243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:19.426 [2024-07-26 12:21:07.277256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:27:19.426 [2024-07-26 12:21:07.277266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.426 [2024-07-26 12:21:07.277279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:19.427 [2024-07-26 12:21:07.277288] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:19.427 [2024-07-26 12:21:07.277301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:19.427 [2024-07-26 12:21:07.277326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:19.427 [2024-07-26 12:21:07.277335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:19.427 [2024-07-26 12:21:07.277363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:19.427 [2024-07-26 12:21:07.277399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277422] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:19.427 [2024-07-26 12:21:07.277431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277443] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:19.427 [2024-07-26 12:21:07.277465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:19.427 [2024-07-26 12:21:07.277497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:19.427 [2024-07-26 12:21:07.277536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:19.427 [2024-07-26 12:21:07.277567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:19.427 [2024-07-26 12:21:07.277618] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:19.427 [2024-07-26 12:21:07.277628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277640] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:19.427 [2024-07-26 12:21:07.277651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:19.427 [2024-07-26 12:21:07.277664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:19.427 [2024-07-26 12:21:07.277689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:19.427 [2024-07-26 12:21:07.277699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:19.427 [2024-07-26 12:21:07.277714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:19.427 [2024-07-26 12:21:07.277724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:19.427 [2024-07-26 12:21:07.277737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:19.427 [2024-07-26 12:21:07.277747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:19.427 [2024-07-26 12:21:07.277765] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:19.427 [2024-07-26 12:21:07.277782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:19.427 [2024-07-26 12:21:07.277809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:19.427 [2024-07-26 12:21:07.277850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:19.427 [2024-07-26 12:21:07.277861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:19.427 [2024-07-26 12:21:07.277875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:19.427 [2024-07-26 12:21:07.277886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.277962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:19.427 [2024-07-26 12:21:07.277978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:19.427 [2024-07-26 12:21:07.277990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.278004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:19.427 [2024-07-26 12:21:07.278016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:19.427 [2024-07-26 12:21:07.278030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:19.427 [2024-07-26 12:21:07.278041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:19.427 [2024-07-26 12:21:07.278056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:19.427 [2024-07-26 12:21:07.278067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:19.427 [2024-07-26 12:21:07.278081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.024 ms 00:27:19.427 [2024-07-26 12:21:07.278092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:19.427 [2024-07-26 12:21:07.278153] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
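Note on the layout dump above: the superblock metadata table gives each region as blk_offs/blk_sz counted in FTL blocks, while the dump_region lines report MiB. The two agree if one FTL block is 4 KiB; that block size is an inference from the match, not something this log states explicitly. A quick cross-check in shell, with the hex values copied from the tables above:

  echo $(( 0x20 * 4 ))             # offset of the 0xe80-block nvc region in KiB -> 128   (0.12 MiB, matches "Region l2p" offset)
  echo $(( 0xe80 * 4 ))            # size of that region in KiB                  -> 14848 (14.50 MiB, matches "Region l2p" blocks)
  echo $(( 0x480000 * 4 / 1024 ))  # base-dev data region in MiB                 -> 18432 (matches "Region data_btm" blocks)

The 14.50 MiB region also comfortably holds the reported L2P table: 3774873 entries at an address size of 4 bytes is about 14.4 MiB.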
00:27:19.427 [2024-07-26 12:21:07.278168] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:21.957 [2024-07-26 12:21:09.876778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.957 [2024-07-26 12:21:09.876837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:21.957 [2024-07-26 12:21:09.876857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2602.833 ms 00:27:21.957 [2024-07-26 12:21:09.876869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.957 [2024-07-26 12:21:09.921610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.957 [2024-07-26 12:21:09.921664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:21.957 [2024-07-26 12:21:09.921683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.510 ms 00:27:21.957 [2024-07-26 12:21:09.921694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:21.957 [2024-07-26 12:21:09.921812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:21.957 [2024-07-26 12:21:09.921826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:21.957 [2024-07-26 12:21:09.921843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:21.957 [2024-07-26 12:21:09.921853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:09.970304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:09.970357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:22.216 [2024-07-26 12:21:09.970376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.463 ms 00:27:22.216 [2024-07-26 12:21:09.970386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:09.970446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:09.970457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:22.216 [2024-07-26 12:21:09.970475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:22.216 [2024-07-26 12:21:09.970485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:09.970974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:09.970988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:22.216 [2024-07-26 12:21:09.971001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:27:22.216 [2024-07-26 12:21:09.971011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:09.971064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:09.971079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:22.216 [2024-07-26 12:21:09.971092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:22.216 [2024-07-26 12:21:09.971102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:09.993884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:09.993936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:22.216 [2024-07-26 12:21:09.993956] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.771 ms 00:27:22.216 [2024-07-26 12:21:09.993968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.216 [2024-07-26 12:21:10.009817] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:22.216 [2024-07-26 12:21:10.010930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.216 [2024-07-26 12:21:10.010960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:22.217 [2024-07-26 12:21:10.010975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.853 ms 00:27:22.217 [2024-07-26 12:21:10.010988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.217 [2024-07-26 12:21:10.058973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.217 [2024-07-26 12:21:10.059059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:22.217 [2024-07-26 12:21:10.059077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.009 ms 00:27:22.217 [2024-07-26 12:21:10.059090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.217 [2024-07-26 12:21:10.059245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.217 [2024-07-26 12:21:10.059263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:22.217 [2024-07-26 12:21:10.059274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:22.217 [2024-07-26 12:21:10.059292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.217 [2024-07-26 12:21:10.100336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.217 [2024-07-26 12:21:10.100414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:22.217 [2024-07-26 12:21:10.100430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.030 ms 00:27:22.217 [2024-07-26 12:21:10.100447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.217 [2024-07-26 12:21:10.139321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.217 [2024-07-26 12:21:10.139396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:22.217 [2024-07-26 12:21:10.139412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.852 ms 00:27:22.217 [2024-07-26 12:21:10.139425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.217 [2024-07-26 12:21:10.140198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.217 [2024-07-26 12:21:10.140230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:22.217 [2024-07-26 12:21:10.140246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.703 ms 00:27:22.217 [2024-07-26 12:21:10.140259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.251972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.476 [2024-07-26 12:21:10.252049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:22.476 [2024-07-26 12:21:10.252067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 111.797 ms 00:27:22.476 [2024-07-26 12:21:10.252085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.294952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:22.476 [2024-07-26 12:21:10.295025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:22.476 [2024-07-26 12:21:10.295041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.858 ms 00:27:22.476 [2024-07-26 12:21:10.295055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.334063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.476 [2024-07-26 12:21:10.334145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:22.476 [2024-07-26 12:21:10.334180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.991 ms 00:27:22.476 [2024-07-26 12:21:10.334193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.373560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.476 [2024-07-26 12:21:10.373653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:22.476 [2024-07-26 12:21:10.373669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.367 ms 00:27:22.476 [2024-07-26 12:21:10.373682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.373758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.476 [2024-07-26 12:21:10.373773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:22.476 [2024-07-26 12:21:10.373785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:22.476 [2024-07-26 12:21:10.373802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.373917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:22.476 [2024-07-26 12:21:10.373937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:22.476 [2024-07-26 12:21:10.373948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:22.476 [2024-07-26 12:21:10.373960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:22.476 [2024-07-26 12:21:10.375077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3120.522 ms, result 0 00:27:22.476 { 00:27:22.476 "name": "ftl", 00:27:22.476 "uuid": "1b0154a1-58e8-4c6b-b408-534781e5ff98" 00:27:22.476 } 00:27:22.476 12:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:22.735 [2024-07-26 12:21:10.649874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.735 12:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:22.994 12:21:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:23.254 [2024-07-26 12:21:11.053931] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:23.254 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:23.514 [2024-07-26 12:21:11.236618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:23.514 12:21:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:23.773 Fill FTL, iteration 1 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84344 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84344 /var/tmp/spdk.tgt.sock 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84344 ']' 00:27:23.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:23.773 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:23.774 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:23.774 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:23.774 12:21:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:23.774 [2024-07-26 12:21:11.654336] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
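At this point the test switches from target-side setup to the fill phase. tcp_initiator_setup (ftl/common.sh in the trace above) launches a second SPDK application on core 1 with its own RPC socket, /var/tmp/spdk.tgt.sock. As the next trace lines show, it attaches the FTL namespace the target exported over NVMe/TCP on 127.0.0.1:4420 and saves the resulting bdev configuration, presumably into test/ftl/config/ini.json (the file common.sh checks for above and the spdk_dd runs below consume via --json). The two RPCs, roughly as they appear in the trace (paths shortened):

  scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # namespace bdev comes back as ftln1
  scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev         # bdev config consumed by spdk_dd

The helper process is killed again as soon as the configuration has been captured.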
00:27:23.774 [2024-07-26 12:21:11.654477] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84344 ] 00:27:24.033 [2024-07-26 12:21:11.812188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.291 [2024-07-26 12:21:12.078037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.230 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:25.230 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:25.230 12:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:25.488 ftln1 00:27:25.488 12:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:25.488 12:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84344 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84344 ']' 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84344 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84344 00:27:25.747 killing process with pid 84344 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84344' 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84344 00:27:25.747 12:21:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84344 00:27:28.284 12:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:28.284 12:21:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:28.542 [2024-07-26 12:21:16.324304] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
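The fill itself is a single spdk_dd run (pid 84409 in this log) that streams /dev/urandom into the FTL bdev: 1024 blocks of 1 MiB at queue depth 2, i.e. exactly 1 GiB per iteration. The invocation from the trace, with line breaks added for readability only:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

The "Copying: ... MBps" lines that follow are spdk_dd's own progress output; iteration 1 averages roughly 258 MBps here.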
00:27:28.542 [2024-07-26 12:21:16.324432] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84409 ] 00:27:28.542 [2024-07-26 12:21:16.495368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.109 [2024-07-26 12:21:16.784507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.744  Copying: 273/1024 [MB] (273 MBps) Copying: 539/1024 [MB] (266 MBps) Copying: 791/1024 [MB] (252 MBps) Copying: 1024/1024 [MB] (average 258 MBps) 00:27:34.744 00:27:34.744 Calculate MD5 checksum, iteration 1 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:34.744 12:21:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.744 [2024-07-26 12:21:22.720869] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
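To fingerprint what was just written, the script reads the same 1 GiB back through ftln1 into a scratch file and hashes it; the digest is stored in the sums array (it shows up just below as sums[i]=c15b0ecf...). The read-back is the mirror image of the fill, swapping --ob/--seek for --ib/--skip (options as in the trace, line breaks added):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '

Keeping these checksums only makes sense if they are compared against a second read of the same extents later, after the prepared shutdown and restart; that comparison is outside this part of the log.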
00:27:34.744 [2024-07-26 12:21:22.720998] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84473 ] 00:27:35.003 [2024-07-26 12:21:22.890451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.261 [2024-07-26 12:21:23.172753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.796  Copying: 674/1024 [MB] (674 MBps) Copying: 1024/1024 [MB] (average 655 MBps) 00:27:38.796 00:27:38.796 12:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:38.796 12:21:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:40.703 Fill FTL, iteration 2 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c15b0ecffaf95b1a163104704268d36d 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.703 12:21:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:40.703 [2024-07-26 12:21:28.295214] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:27:40.703 [2024-07-26 12:21:28.295351] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84531 ] 00:27:40.703 [2024-07-26 12:21:28.469035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.962 [2024-07-26 12:21:28.723120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.993  Copying: 244/1024 [MB] (244 MBps) Copying: 503/1024 [MB] (259 MBps) Copying: 750/1024 [MB] (247 MBps) Copying: 987/1024 [MB] (237 MBps) Copying: 1024/1024 [MB] (average 246 MBps) 00:27:46.993 00:27:46.993 Calculate MD5 checksum, iteration 2 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:46.993 12:21:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:46.993 [2024-07-26 12:21:34.889177] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
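Iteration 2 repeats the pattern one gigabyte further into the device: the fill uses --seek=1024 and the read-back --skip=1024 (both counted in 1 MiB blocks), so each pass covers its own disjoint 1 GiB extent and records its own checksum. Pieced together from the xtrace, the driving loop looks roughly like this; the actual script is test/ftl/upgrade_shutdown.sh and may differ in detail, and the testfile variable here simply stands for test/ftl/file:

  seek=0; skip=0; sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$( md5sum "$testfile" | cut -f1 -d' ' )
  done

with bs=1048576, count=1024, qd=2 and iterations=2 as set near the top of this phase.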
00:27:46.993 [2024-07-26 12:21:34.889315] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84605 ] 00:27:47.251 [2024-07-26 12:21:35.062282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.510 [2024-07-26 12:21:35.307455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.348  Copying: 620/1024 [MB] (620 MBps) Copying: 1024/1024 [MB] (average 632 MBps) 00:27:51.348 00:27:51.348 12:21:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:51.348 12:21:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:53.325 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:53.325 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=463b9dcf4a4d5c57a5dd524cfb4053c1 00:27:53.325 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:53.325 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:53.325 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:53.325 [2024-07-26 12:21:41.295774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.325 [2024-07-26 12:21:41.295846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:53.325 [2024-07-26 12:21:41.295863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:53.325 [2024-07-26 12:21:41.295878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.325 [2024-07-26 12:21:41.295913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.325 [2024-07-26 12:21:41.295924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:53.325 [2024-07-26 12:21:41.295936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:53.325 [2024-07-26 12:21:41.295947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.325 [2024-07-26 12:21:41.295980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.325 [2024-07-26 12:21:41.295992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:53.325 [2024-07-26 12:21:41.296003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:53.325 [2024-07-26 12:21:41.296014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.325 [2024-07-26 12:21:41.296081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.299 ms, result 0 00:27:53.325 true 00:27:53.584 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.584 { 00:27:53.584 "name": "ftl", 00:27:53.584 "properties": [ 00:27:53.584 { 00:27:53.584 "name": "superblock_version", 00:27:53.584 "value": 5, 00:27:53.584 "read-only": true 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "name": "base_device", 00:27:53.584 "bands": [ 00:27:53.584 { 00:27:53.584 "id": 0, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 1, 
00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 2, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 3, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 4, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 5, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 6, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 7, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 8, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 9, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 10, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 11, 00:27:53.584 "state": "FREE", 00:27:53.584 "validity": 0.0 00:27:53.584 }, 00:27:53.584 { 00:27:53.584 "id": 12, 00:27:53.584 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 13, 00:27:53.585 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 14, 00:27:53.585 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 15, 00:27:53.585 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 16, 00:27:53.585 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 17, 00:27:53.585 "state": "FREE", 00:27:53.585 "validity": 0.0 00:27:53.585 } 00:27:53.585 ], 00:27:53.585 "read-only": true 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "name": "cache_device", 00:27:53.585 "type": "bdev", 00:27:53.585 "chunks": [ 00:27:53.585 { 00:27:53.585 "id": 0, 00:27:53.585 "state": "INACTIVE", 00:27:53.585 "utilization": 0.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 1, 00:27:53.585 "state": "CLOSED", 00:27:53.585 "utilization": 1.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 2, 00:27:53.585 "state": "CLOSED", 00:27:53.585 "utilization": 1.0 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 3, 00:27:53.585 "state": "OPEN", 00:27:53.585 "utilization": 0.001953125 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "id": 4, 00:27:53.585 "state": "OPEN", 00:27:53.585 "utilization": 0.0 00:27:53.585 } 00:27:53.585 ], 00:27:53.585 "read-only": true 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "name": "verbose_mode", 00:27:53.585 "value": true, 00:27:53.585 "unit": "", 00:27:53.585 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:53.585 }, 00:27:53.585 { 00:27:53.585 "name": "prep_upgrade_on_shutdown", 00:27:53.585 "value": false, 00:27:53.585 "unit": "", 00:27:53.585 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:53.585 } 00:27:53.585 ] 00:27:53.585 } 00:27:53.585 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:53.843 [2024-07-26 12:21:41.699462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.843 [2024-07-26 12:21:41.699530] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:53.843 [2024-07-26 12:21:41.699546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:53.843 [2024-07-26 12:21:41.699557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.843 [2024-07-26 12:21:41.699587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.843 [2024-07-26 12:21:41.699599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:53.843 [2024-07-26 12:21:41.699609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:53.843 [2024-07-26 12:21:41.699620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.843 [2024-07-26 12:21:41.699641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:53.843 [2024-07-26 12:21:41.699652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:53.843 [2024-07-26 12:21:41.699663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:53.844 [2024-07-26 12:21:41.699674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:53.844 [2024-07-26 12:21:41.699736] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.267 ms, result 0 00:27:53.844 true 00:27:53.844 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:53.844 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:53.844 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:54.102 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:54.102 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:54.102 12:21:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:54.359 [2024-07-26 12:21:42.175240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.359 [2024-07-26 12:21:42.175299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:54.359 [2024-07-26 12:21:42.175316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:54.359 [2024-07-26 12:21:42.175327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.359 [2024-07-26 12:21:42.175356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.359 [2024-07-26 12:21:42.175369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:54.359 [2024-07-26 12:21:42.175380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:54.359 [2024-07-26 12:21:42.175391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.359 [2024-07-26 12:21:42.175413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:54.359 [2024-07-26 12:21:42.175424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:54.359 [2024-07-26 12:21:42.175435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:54.359 [2024-07-26 12:21:42.175445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:54.359 [2024-07-26 12:21:42.175510] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.266 ms, result 0 00:27:54.359 true 00:27:54.359 12:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:54.617 { 00:27:54.617 "name": "ftl", 00:27:54.617 "properties": [ 00:27:54.617 { 00:27:54.617 "name": "superblock_version", 00:27:54.617 "value": 5, 00:27:54.617 "read-only": true 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "name": "base_device", 00:27:54.617 "bands": [ 00:27:54.617 { 00:27:54.617 "id": 0, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 1, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 2, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 3, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 4, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 5, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 6, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 7, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 8, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 9, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 10, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 11, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 12, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 13, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 14, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 15, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 16, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 17, 00:27:54.617 "state": "FREE", 00:27:54.617 "validity": 0.0 00:27:54.617 } 00:27:54.617 ], 00:27:54.617 "read-only": true 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "name": "cache_device", 00:27:54.617 "type": "bdev", 00:27:54.617 "chunks": [ 00:27:54.617 { 00:27:54.617 "id": 0, 00:27:54.617 "state": "INACTIVE", 00:27:54.617 "utilization": 0.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 1, 00:27:54.617 "state": "CLOSED", 00:27:54.617 "utilization": 1.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 2, 00:27:54.617 "state": "CLOSED", 00:27:54.617 "utilization": 1.0 00:27:54.617 }, 00:27:54.617 { 00:27:54.617 "id": 3, 00:27:54.617 "state": "OPEN", 00:27:54.617 "utilization": 0.001953125 00:27:54.617 }, 00:27:54.617 { 00:27:54.618 "id": 4, 00:27:54.618 "state": "OPEN", 00:27:54.618 "utilization": 0.0 00:27:54.618 } 00:27:54.618 ], 00:27:54.618 "read-only": true 00:27:54.618 }, 00:27:54.618 { 00:27:54.618 "name": "verbose_mode", 00:27:54.618 "value": true, 00:27:54.618 "unit": "", 00:27:54.618 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:54.618 }, 00:27:54.618 { 00:27:54.618 "name": "prep_upgrade_on_shutdown", 00:27:54.618 "value": true, 00:27:54.618 "unit": "", 00:27:54.618 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:54.618 } 00:27:54.618 ] 00:27:54.618 } 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84222 ]] 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84222 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84222 ']' 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84222 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84222 00:27:54.618 killing process with pid 84222 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84222' 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84222 00:27:54.618 12:21:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84222 00:27:55.993 [2024-07-26 12:21:43.653914] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:55.993 [2024-07-26 12:21:43.675686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.993 [2024-07-26 12:21:43.675754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:55.993 [2024-07-26 12:21:43.675769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:55.993 [2024-07-26 12:21:43.675780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.993 [2024-07-26 12:21:43.675824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:55.993 [2024-07-26 12:21:43.680008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.993 [2024-07-26 12:21:43.680056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:55.993 [2024-07-26 12:21:43.680081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.171 ms 00:27:55.993 [2024-07-26 12:21:43.680091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.104186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.104274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:04.212 [2024-07-26 12:21:51.104296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7436.089 ms 00:28:04.212 [2024-07-26 12:21:51.104309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.105378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.105405] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:04.212 [2024-07-26 12:21:51.105419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:28:04.212 [2024-07-26 12:21:51.105430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.106474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.106501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:04.212 [2024-07-26 12:21:51.106521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.014 ms 00:28:04.212 [2024-07-26 12:21:51.106532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.122944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.123012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:04.212 [2024-07-26 12:21:51.123029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.391 ms 00:28:04.212 [2024-07-26 12:21:51.123041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.132931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.133008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:04.212 [2024-07-26 12:21:51.133025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.846 ms 00:28:04.212 [2024-07-26 12:21:51.133037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.133231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.133266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:04.212 [2024-07-26 12:21:51.133279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.125 ms 00:28:04.212 [2024-07-26 12:21:51.133305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.150999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.151073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:04.212 [2024-07-26 12:21:51.151091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.698 ms 00:28:04.212 [2024-07-26 12:21:51.151101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.169240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.169311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:04.212 [2024-07-26 12:21:51.169328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.084 ms 00:28:04.212 [2024-07-26 12:21:51.169338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.187055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.187145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:04.212 [2024-07-26 12:21:51.187162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.677 ms 00:28:04.212 [2024-07-26 12:21:51.187173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.204520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 
[2024-07-26 12:21:51.204587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:04.212 [2024-07-26 12:21:51.204604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.231 ms 00:28:04.212 [2024-07-26 12:21:51.204614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.204669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:04.212 [2024-07-26 12:21:51.204688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:04.212 [2024-07-26 12:21:51.204702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:04.212 [2024-07-26 12:21:51.204714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:04.212 [2024-07-26 12:21:51.204726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:04.212 [2024-07-26 12:21:51.204922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:04.212 [2024-07-26 12:21:51.204933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1b0154a1-58e8-4c6b-b408-534781e5ff98 00:28:04.212 [2024-07-26 12:21:51.204944] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:04.212 [2024-07-26 12:21:51.204954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:04.212 [2024-07-26 12:21:51.204970] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:04.212 [2024-07-26 12:21:51.204982] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:04.212 [2024-07-26 12:21:51.204992] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:04.212 [2024-07-26 12:21:51.205004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:04.212 [2024-07-26 12:21:51.205014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:04.212 [2024-07-26 12:21:51.205024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:04.212 [2024-07-26 12:21:51.205035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:04.212 [2024-07-26 12:21:51.205046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.205056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:04.212 [2024-07-26 12:21:51.205068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:28:04.212 [2024-07-26 12:21:51.205079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.228199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.228292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:04.212 [2024-07-26 12:21:51.228309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.082 ms 00:28:04.212 [2024-07-26 12:21:51.228319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.228946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.212 [2024-07-26 12:21:51.228963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:04.212 [2024-07-26 12:21:51.228974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.582 ms 00:28:04.212 [2024-07-26 12:21:51.228985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.297635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.297721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:04.212 [2024-07-26 12:21:51.297737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.297749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.297808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.297820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:04.212 [2024-07-26 12:21:51.297831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.297847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.297955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.297981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:04.212 [2024-07-26 12:21:51.297992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.298003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.298023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 
12:21:51.298034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:04.212 [2024-07-26 12:21:51.298045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.298055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.424387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.424461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:04.212 [2024-07-26 12:21:51.424477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.424487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.530613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.530684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.212 [2024-07-26 12:21:51.530698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.212 [2024-07-26 12:21:51.530709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.212 [2024-07-26 12:21:51.530819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.212 [2024-07-26 12:21:51.530832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:04.213 [2024-07-26 12:21:51.530853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.530864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.530915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.213 [2024-07-26 12:21:51.530926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:04.213 [2024-07-26 12:21:51.530936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.530946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.531054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.213 [2024-07-26 12:21:51.531066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:04.213 [2024-07-26 12:21:51.531077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.531092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.531156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.213 [2024-07-26 12:21:51.531169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:04.213 [2024-07-26 12:21:51.531179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.531189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.531228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:04.213 [2024-07-26 12:21:51.531239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:04.213 [2024-07-26 12:21:51.531249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.531263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.531348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:28:04.213 [2024-07-26 12:21:51.531363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:04.213 [2024-07-26 12:21:51.531375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:04.213 [2024-07-26 12:21:51.531385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.213 [2024-07-26 12:21:51.531540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7868.558 ms, result 0 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84812 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:07.498 12:21:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84812 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84812 ']' 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.499 12:21:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.499 [2024-07-26 12:21:54.965803] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
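[editor's note] At this point the upgrade_shutdown test has brought the target back up for the next phase: spdk_tgt is pinned to core 0 via --cpumask, loads the previously saved tgt.json, and the harness then sits in waitforlisten (max_retries=100) until the RPC socket at /var/tmp/spdk.sock answers. A minimal launch-and-wait sketch reusing only the binary, options and paths visible in the trace; the polling loop itself is illustrative, not the autotest_common.sh implementation:

    # illustrative launch-and-wait sketch (paths/options copied from the trace above)
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    "$spdk_tgt_bin" '--cpumask=[0]' --config="$tgt_cnfg" &
    spdk_tgt_pid=$!
    # poll for the RPC UNIX socket instead of sleeping a fixed time (hypothetical loop)
    for ((retry = 0; retry < 100; retry++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.5
    done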
00:28:07.499 [2024-07-26 12:21:54.965942] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84812 ] 00:28:07.499 [2024-07-26 12:21:55.138627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.499 [2024-07-26 12:21:55.367858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.877 [2024-07-26 12:21:56.447073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:08.877 [2024-07-26 12:21:56.447186] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:08.877 [2024-07-26 12:21:56.595823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.877 [2024-07-26 12:21:56.595901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:08.877 [2024-07-26 12:21:56.595918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:08.877 [2024-07-26 12:21:56.595928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.877 [2024-07-26 12:21:56.596018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.877 [2024-07-26 12:21:56.596031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:08.878 [2024-07-26 12:21:56.596042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:08.878 [2024-07-26 12:21:56.596052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.596082] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:08.878 [2024-07-26 12:21:56.597704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:08.878 [2024-07-26 12:21:56.597878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.597896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:08.878 [2024-07-26 12:21:56.597910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.805 ms 00:28:08.878 [2024-07-26 12:21:56.597926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.599625] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:08.878 [2024-07-26 12:21:56.621407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.621485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:08.878 [2024-07-26 12:21:56.621503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.815 ms 00:28:08.878 [2024-07-26 12:21:56.621515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.621651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.621666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:08.878 [2024-07-26 12:21:56.621679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:08.878 [2024-07-26 12:21:56.621689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.629874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 
12:21:56.629928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:08.878 [2024-07-26 12:21:56.629943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.063 ms 00:28:08.878 [2024-07-26 12:21:56.629954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.630044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.630062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:08.878 [2024-07-26 12:21:56.630079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:28:08.878 [2024-07-26 12:21:56.630090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.630181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.630195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:08.878 [2024-07-26 12:21:56.630206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:08.878 [2024-07-26 12:21:56.630217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.630249] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:08.878 [2024-07-26 12:21:56.636135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.636193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:08.878 [2024-07-26 12:21:56.636208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.902 ms 00:28:08.878 [2024-07-26 12:21:56.636218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.636267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.636278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:08.878 [2024-07-26 12:21:56.636294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:08.878 [2024-07-26 12:21:56.636304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.636403] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:08.878 [2024-07-26 12:21:56.636430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:08.878 [2024-07-26 12:21:56.636468] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:08.878 [2024-07-26 12:21:56.636487] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:08.878 [2024-07-26 12:21:56.636580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:08.878 [2024-07-26 12:21:56.636597] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:08.878 [2024-07-26 12:21:56.636611] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:08.878 [2024-07-26 12:21:56.636625] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:08.878 [2024-07-26 12:21:56.636638] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:08.878 [2024-07-26 12:21:56.636650] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:08.878 [2024-07-26 12:21:56.636660] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:08.878 [2024-07-26 12:21:56.636671] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:08.878 [2024-07-26 12:21:56.636682] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:08.878 [2024-07-26 12:21:56.636692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.636702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:08.878 [2024-07-26 12:21:56.636713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:28:08.878 [2024-07-26 12:21:56.636726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.636804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.878 [2024-07-26 12:21:56.636821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:08.878 [2024-07-26 12:21:56.636832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:08.878 [2024-07-26 12:21:56.636843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.878 [2024-07-26 12:21:56.636937] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:08.878 [2024-07-26 12:21:56.636950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:08.878 [2024-07-26 12:21:56.636961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:08.878 [2024-07-26 12:21:56.636972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.636987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:08.878 [2024-07-26 12:21:56.636997] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:08.878 [2024-07-26 12:21:56.637017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:08.878 [2024-07-26 12:21:56.637029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:08.878 [2024-07-26 12:21:56.637038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:08.878 [2024-07-26 12:21:56.637058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:08.878 [2024-07-26 12:21:56.637068] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:08.878 [2024-07-26 12:21:56.637088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:08.878 [2024-07-26 12:21:56.637097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:08.878 [2024-07-26 12:21:56.637117] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:08.878 [2024-07-26 12:21:56.637149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637158] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:08.878 [2024-07-26 12:21:56.637168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:08.878 [2024-07-26 12:21:56.637177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:08.878 [2024-07-26 12:21:56.637187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:08.878 [2024-07-26 12:21:56.637197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:08.878 [2024-07-26 12:21:56.637207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:08.878 [2024-07-26 12:21:56.637216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:08.878 [2024-07-26 12:21:56.637226] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:08.878 [2024-07-26 12:21:56.637236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:08.878 [2024-07-26 12:21:56.637245] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:08.878 [2024-07-26 12:21:56.637255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:08.878 [2024-07-26 12:21:56.637265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:08.878 [2024-07-26 12:21:56.637274] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:08.878 [2024-07-26 12:21:56.637284] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:08.878 [2024-07-26 12:21:56.637294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:08.878 [2024-07-26 12:21:56.637313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:08.878 [2024-07-26 12:21:56.637322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:08.878 [2024-07-26 12:21:56.637342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637351] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:08.878 [2024-07-26 12:21:56.637376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:08.878 [2024-07-26 12:21:56.637387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.878 [2024-07-26 12:21:56.637396] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:08.878 [2024-07-26 12:21:56.637406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:08.878 [2024-07-26 12:21:56.637417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:08.879 [2024-07-26 12:21:56.637428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:08.879 [2024-07-26 12:21:56.637439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:08.879 [2024-07-26 12:21:56.637449] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:08.879 [2024-07-26 12:21:56.637458] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:08.879 [2024-07-26 12:21:56.637468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:08.879 [2024-07-26 12:21:56.637492] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:08.879 [2024-07-26 12:21:56.637502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:08.879 [2024-07-26 12:21:56.637513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:08.879 [2024-07-26 12:21:56.637526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:08.879 [2024-07-26 12:21:56.637549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:08.879 [2024-07-26 12:21:56.637582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:08.879 [2024-07-26 12:21:56.637621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:08.879 [2024-07-26 12:21:56.637632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:08.879 [2024-07-26 12:21:56.637643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:08.879 [2024-07-26 12:21:56.637721] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:08.879 [2024-07-26 12:21:56.637734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:08.879 [2024-07-26 12:21:56.637757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:08.879 [2024-07-26 12:21:56.637767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:08.879 [2024-07-26 12:21:56.637781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:08.879 [2024-07-26 12:21:56.637793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:08.879 [2024-07-26 12:21:56.637804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:08.879 [2024-07-26 12:21:56.637815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.913 ms 00:28:08.879 [2024-07-26 12:21:56.637830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:08.879 [2024-07-26 12:21:56.637894] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:08.879 [2024-07-26 12:21:56.637912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:12.164 [2024-07-26 12:21:59.485493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.164 [2024-07-26 12:21:59.485560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:12.164 [2024-07-26 12:21:59.485578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2852.220 ms 00:28:12.164 [2024-07-26 12:21:59.485607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.164 [2024-07-26 12:21:59.526449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.164 [2024-07-26 12:21:59.526509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:12.164 [2024-07-26 12:21:59.526525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.614 ms 00:28:12.164 [2024-07-26 12:21:59.526536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.164 [2024-07-26 12:21:59.526657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.526670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:12.165 [2024-07-26 12:21:59.526682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:12.165 [2024-07-26 12:21:59.526692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.573467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.573522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:12.165 [2024-07-26 12:21:59.573538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.784 ms 00:28:12.165 [2024-07-26 12:21:59.573548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.573623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.573635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:12.165 [2024-07-26 12:21:59.573646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:12.165 [2024-07-26 12:21:59.573655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.574159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.574174] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:12.165 [2024-07-26 12:21:59.574185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:28:12.165 [2024-07-26 12:21:59.574195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.574240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.574251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:12.165 [2024-07-26 12:21:59.574262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:12.165 [2024-07-26 12:21:59.574271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.595225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.595278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:12.165 [2024-07-26 12:21:59.595293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.963 ms 00:28:12.165 [2024-07-26 12:21:59.595304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.614357] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:12.165 [2024-07-26 12:21:59.614405] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:12.165 [2024-07-26 12:21:59.614422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.614433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:12.165 [2024-07-26 12:21:59.614446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.004 ms 00:28:12.165 [2024-07-26 12:21:59.614455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.635029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.635077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:12.165 [2024-07-26 12:21:59.635093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.553 ms 00:28:12.165 [2024-07-26 12:21:59.635103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.654869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.654916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:12.165 [2024-07-26 12:21:59.654930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.719 ms 00:28:12.165 [2024-07-26 12:21:59.654941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.673495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.673544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:12.165 [2024-07-26 12:21:59.673559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.532 ms 00:28:12.165 [2024-07-26 12:21:59.673569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.674428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.674454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:12.165 [2024-07-26 
12:21:59.674470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.727 ms 00:28:12.165 [2024-07-26 12:21:59.674480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.777865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.777934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:12.165 [2024-07-26 12:21:59.777951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 103.523 ms 00:28:12.165 [2024-07-26 12:21:59.777962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.791863] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:12.165 [2024-07-26 12:21:59.793037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.793067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:12.165 [2024-07-26 12:21:59.793089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.024 ms 00:28:12.165 [2024-07-26 12:21:59.793100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.793247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.793263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:12.165 [2024-07-26 12:21:59.793274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:12.165 [2024-07-26 12:21:59.793285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.793355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.793373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:12.165 [2024-07-26 12:21:59.793384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:12.165 [2024-07-26 12:21:59.793398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.793423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.793434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:12.165 [2024-07-26 12:21:59.793446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:12.165 [2024-07-26 12:21:59.793456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.793495] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:12.165 [2024-07-26 12:21:59.793508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.793518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:12.165 [2024-07-26 12:21:59.793529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:12.165 [2024-07-26 12:21:59.793540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.835804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.835882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:12.165 [2024-07-26 12:21:59.835900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.303 ms 00:28:12.165 [2024-07-26 12:21:59.835911] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.836025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:21:59.836039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:12.165 [2024-07-26 12:21:59.836051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:12.165 [2024-07-26 12:21:59.836072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:21:59.837442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3246.360 ms, result 0 00:28:12.165 [2024-07-26 12:21:59.852263] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.165 [2024-07-26 12:21:59.868293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:12.165 [2024-07-26 12:21:59.878877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:12.165 12:21:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.165 12:21:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:12.165 12:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:12.165 12:21:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:12.165 12:21:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:12.165 [2024-07-26 12:22:00.098504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:22:00.098574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:12.165 [2024-07-26 12:22:00.098590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:12.165 [2024-07-26 12:22:00.098601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:22:00.098630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:22:00.098642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:12.165 [2024-07-26 12:22:00.098652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:12.165 [2024-07-26 12:22:00.098662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:22:00.098683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.165 [2024-07-26 12:22:00.098694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:12.165 [2024-07-26 12:22:00.098705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:12.165 [2024-07-26 12:22:00.098718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.165 [2024-07-26 12:22:00.098777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.269 ms, result 0 00:28:12.165 true 00:28:12.165 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:12.424 { 00:28:12.424 "name": "ftl", 00:28:12.424 "properties": [ 00:28:12.424 { 00:28:12.424 "name": "superblock_version", 00:28:12.424 "value": 5, 00:28:12.424 "read-only": true 00:28:12.424 }, 
00:28:12.424 { 00:28:12.424 "name": "base_device", 00:28:12.424 "bands": [ 00:28:12.424 { 00:28:12.424 "id": 0, 00:28:12.424 "state": "CLOSED", 00:28:12.424 "validity": 1.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 1, 00:28:12.424 "state": "CLOSED", 00:28:12.424 "validity": 1.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 2, 00:28:12.424 "state": "CLOSED", 00:28:12.424 "validity": 0.007843137254901933 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 3, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 4, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 5, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 6, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 7, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 8, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 9, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 10, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 11, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 12, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 13, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 14, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 15, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.424 }, 00:28:12.424 { 00:28:12.424 "id": 16, 00:28:12.424 "state": "FREE", 00:28:12.424 "validity": 0.0 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "id": 17, 00:28:12.425 "state": "FREE", 00:28:12.425 "validity": 0.0 00:28:12.425 } 00:28:12.425 ], 00:28:12.425 "read-only": true 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "name": "cache_device", 00:28:12.425 "type": "bdev", 00:28:12.425 "chunks": [ 00:28:12.425 { 00:28:12.425 "id": 0, 00:28:12.425 "state": "INACTIVE", 00:28:12.425 "utilization": 0.0 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "id": 1, 00:28:12.425 "state": "OPEN", 00:28:12.425 "utilization": 0.0 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "id": 2, 00:28:12.425 "state": "OPEN", 00:28:12.425 "utilization": 0.0 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "id": 3, 00:28:12.425 "state": "FREE", 00:28:12.425 "utilization": 0.0 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "id": 4, 00:28:12.425 "state": "FREE", 00:28:12.425 "utilization": 0.0 00:28:12.425 } 00:28:12.425 ], 00:28:12.425 "read-only": true 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "name": "verbose_mode", 00:28:12.425 "value": true, 00:28:12.425 "unit": "", 00:28:12.425 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:12.425 }, 00:28:12.425 { 00:28:12.425 "name": "prep_upgrade_on_shutdown", 00:28:12.425 "value": false, 00:28:12.425 "unit": "", 00:28:12.425 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:12.425 } 00:28:12.425 ] 00:28:12.425 } 00:28:12.425 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:12.425 12:22:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:12.425 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:12.683 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:12.684 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:12.684 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:12.684 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:12.684 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:12.942 Validate MD5 checksum, iteration 1 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:12.942 12:22:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.200 [2024-07-26 12:22:00.923869] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
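[editor's note] The two jq guards above distil the bdev_ftl_get_properties JSON into the numbers the test actually checks before the checksum passes start: no cache chunk may hold data (utilization != 0.0) and no band may be left OPENED. Restated outside the xtrace noise, with the jq filters copied verbatim from the trace and the surrounding plumbing only a sketch:

    # guard checks ahead of test_validate_checksum (sketch; filters taken from the trace)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    [[ $used -ne 0 || $opened -ne 0 ]] && echo "FTL not idle: used=$used opened=$opened"

Both counters are 0 in this run, so the first MD5 iteration launches spdk_dd over the NVMe/TCP initiator config (ini.json) to read 1024 x 1 MiB blocks from ftln1 into the scratch file.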
00:28:13.200 [2024-07-26 12:22:00.924211] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84888 ] 00:28:13.200 [2024-07-26 12:22:01.086708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.458 [2024-07-26 12:22:01.328540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.843  Copying: 652/1024 [MB] (652 MBps) Copying: 1024/1024 [MB] (average 653 MBps) 00:28:17.843 00:28:17.843 12:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:17.843 12:22:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c15b0ecffaf95b1a163104704268d36d 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c15b0ecffaf95b1a163104704268d36d != \c\1\5\b\0\e\c\f\f\a\f\9\5\b\1\a\1\6\3\1\0\4\7\0\4\2\6\8\d\3\6\d ]] 00:28:19.217 Validate MD5 checksum, iteration 2 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:19.217 12:22:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.474 [2024-07-26 12:22:07.212811] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
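[editor's note] Each iteration of test_validate_checksum follows the shape visible above: read one 1024 MiB window from ftln1 with spdk_dd, advance skip by the block count, hash the scratch file, and compare the digest against the expected value (the backslash-escaped right-hand side of the [[ ... != ... ]] test is just xtrace quoting the literal so it cannot act as a glob pattern). A condensed sketch of that loop; tcp_dd is the helper invoked in the trace, and the expected[] digests are assumed to have been recorded when the data was written, which this excerpt does not show:

    # loop shape implied by the trace (sketch only)
    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2                                   # two windows validated in this run
    expected=(c15b0ecffaf95b1a163104704268d36d 463b9dcf4a4d5c57a5dd524cfb4053c1)
    skip=0
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "${expected[i]}" ]] || { echo "checksum mismatch in window $i"; exit 1; }
    done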
00:28:19.474 [2024-07-26 12:22:07.213257] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84955 ] 00:28:19.474 [2024-07-26 12:22:07.386129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.731 [2024-07-26 12:22:07.627801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.480  Copying: 639/1024 [MB] (639 MBps) Copying: 1024/1024 [MB] (average 639 MBps) 00:28:25.480 00:28:25.480 12:22:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:25.480 12:22:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=463b9dcf4a4d5c57a5dd524cfb4053c1 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 463b9dcf4a4d5c57a5dd524cfb4053c1 != \4\6\3\b\9\d\c\f\4\a\4\d\5\c\5\7\a\5\d\d\5\2\4\c\f\b\4\0\5\3\c\1 ]] 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84812 ]] 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84812 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85033 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:26.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85033 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85033 ']' 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
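[editor's note] This is the step the test is named for: with both checksum windows verified, tcp_target_shutdown_dirty sends SIGKILL to the old target (pid 84812) instead of asking it to shut down, so the FTL shutdown path (the "Set FTL clean state" step seen in the earlier trace) never runs and the device stays in the dirty state set during startup; a fresh spdk_tgt (pid 85033) is then started from the same tgt.json to exercise recovery. Condensed from the xtrace lines (sketch; tcp_target_setup is the ftl/common.sh helper shown above):

    # dirty shutdown + restart, as in the trace (sketch)
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"   # SIGKILL: no clean FTL shutdown
    unset spdk_tgt_pid
    tcp_target_setup                                     # relaunch spdk_tgt from tgt.json, wait on /var/tmp/spdk.sock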
00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:26.854 12:22:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:27.113 [2024-07-26 12:22:14.858807] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 00:28:27.113 [2024-07-26 12:22:14.858938] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85033 ] 00:28:27.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 84812 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:27.113 [2024-07-26 12:22:15.032817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.371 [2024-07-26 12:22:15.269576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.748 [2024-07-26 12:22:16.291770] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:28.748 [2024-07-26 12:22:16.291835] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:28.748 [2024-07-26 12:22:16.438612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.748 [2024-07-26 12:22:16.438673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:28.748 [2024-07-26 12:22:16.438689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:28.748 [2024-07-26 12:22:16.438700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.438756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.438768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:28.749 [2024-07-26 12:22:16.438778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:28.749 [2024-07-26 12:22:16.438787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.438814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:28.749 [2024-07-26 12:22:16.439882] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:28.749 [2024-07-26 12:22:16.439911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.439922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:28.749 [2024-07-26 12:22:16.439933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.106 ms 00:28:28.749 [2024-07-26 12:22:16.439946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.440321] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:28.749 [2024-07-26 12:22:16.464834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.464881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:28.749 [2024-07-26 12:22:16.464902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.553 ms 
00:28:28.749 [2024-07-26 12:22:16.464913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.480030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.480071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:28.749 [2024-07-26 12:22:16.480085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:28.749 [2024-07-26 12:22:16.480095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.480615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.480637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:28.749 [2024-07-26 12:22:16.480649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:28:28.749 [2024-07-26 12:22:16.480659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.480719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.480732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:28.749 [2024-07-26 12:22:16.480743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:28.749 [2024-07-26 12:22:16.480753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.480785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.480797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:28.749 [2024-07-26 12:22:16.480809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:28.749 [2024-07-26 12:22:16.480819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.480844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:28.749 [2024-07-26 12:22:16.486080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.486114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:28.749 [2024-07-26 12:22:16.486155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.251 ms 00:28:28.749 [2024-07-26 12:22:16.486177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.486214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.486226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:28.749 [2024-07-26 12:22:16.486236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:28.749 [2024-07-26 12:22:16.486247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.486285] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:28.749 [2024-07-26 12:22:16.486309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:28.749 [2024-07-26 12:22:16.486346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:28.749 [2024-07-26 12:22:16.486363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:28.749 [2024-07-26 
12:22:16.486445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:28.749 [2024-07-26 12:22:16.486458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:28.749 [2024-07-26 12:22:16.486471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:28.749 [2024-07-26 12:22:16.486483] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:28.749 [2024-07-26 12:22:16.486495] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:28.749 [2024-07-26 12:22:16.486505] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:28.749 [2024-07-26 12:22:16.486518] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:28.749 [2024-07-26 12:22:16.486527] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:28.749 [2024-07-26 12:22:16.486537] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:28.749 [2024-07-26 12:22:16.486547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.486561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:28.749 [2024-07-26 12:22:16.486571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:28:28.749 [2024-07-26 12:22:16.486580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.486648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.749 [2024-07-26 12:22:16.486659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:28.749 [2024-07-26 12:22:16.486668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:28:28.749 [2024-07-26 12:22:16.486680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.749 [2024-07-26 12:22:16.486765] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:28.749 [2024-07-26 12:22:16.486777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:28.749 [2024-07-26 12:22:16.486787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.749 [2024-07-26 12:22:16.486798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.486808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:28.749 [2024-07-26 12:22:16.486817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.486827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:28.749 [2024-07-26 12:22:16.486836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:28.749 [2024-07-26 12:22:16.486846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:28.749 [2024-07-26 12:22:16.486856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.486865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:28.749 [2024-07-26 12:22:16.486874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:28.749 [2024-07-26 12:22:16.486883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 
12:22:16.486893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:28.749 [2024-07-26 12:22:16.486902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:28.749 [2024-07-26 12:22:16.486911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.486920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:28.749 [2024-07-26 12:22:16.486929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:28.749 [2024-07-26 12:22:16.486938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.486947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:28.749 [2024-07-26 12:22:16.486956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:28.749 [2024-07-26 12:22:16.486965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.749 [2024-07-26 12:22:16.486974] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:28.749 [2024-07-26 12:22:16.486983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:28.749 [2024-07-26 12:22:16.486992] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.749 [2024-07-26 12:22:16.487001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:28.749 [2024-07-26 12:22:16.487010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:28.749 [2024-07-26 12:22:16.487018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.749 [2024-07-26 12:22:16.487027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:28.749 [2024-07-26 12:22:16.487036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:28.749 [2024-07-26 12:22:16.487045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:28.749 [2024-07-26 12:22:16.487054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:28.749 [2024-07-26 12:22:16.487062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:28.749 [2024-07-26 12:22:16.487071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.487080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:28.749 [2024-07-26 12:22:16.487088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:28.749 [2024-07-26 12:22:16.487097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.487106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:28.749 [2024-07-26 12:22:16.487114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:28.749 [2024-07-26 12:22:16.487135] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.749 [2024-07-26 12:22:16.487145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:28.750 [2024-07-26 12:22:16.487155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:28.750 [2024-07-26 12:22:16.487164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.750 [2024-07-26 12:22:16.487172] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:28.750 [2024-07-26 12:22:16.487182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:28.750 
[2024-07-26 12:22:16.487192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:28.750 [2024-07-26 12:22:16.487202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:28.750 [2024-07-26 12:22:16.487212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:28.750 [2024-07-26 12:22:16.487221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:28.750 [2024-07-26 12:22:16.487241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:28.750 [2024-07-26 12:22:16.487250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:28.750 [2024-07-26 12:22:16.487259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:28.750 [2024-07-26 12:22:16.487269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:28.750 [2024-07-26 12:22:16.487280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:28.750 [2024-07-26 12:22:16.487295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:28.750 [2024-07-26 12:22:16.487317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:28.750 [2024-07-26 12:22:16.487348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:28.750 [2024-07-26 12:22:16.487358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:28.750 [2024-07-26 12:22:16.487368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:28.750 [2024-07-26 12:22:16.487378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:28.750 [2024-07-26 12:22:16.487447] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:28.750 [2024-07-26 12:22:16.487457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.750 [2024-07-26 12:22:16.487478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:28.750 [2024-07-26 12:22:16.487488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:28.750 [2024-07-26 12:22:16.487499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:28.750 [2024-07-26 12:22:16.487509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.487519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:28.750 [2024-07-26 12:22:16.487529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.795 ms 00:28:28.750 [2024-07-26 12:22:16.487538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.530349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.530404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:28.750 [2024-07-26 12:22:16.530420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.821 ms 00:28:28.750 [2024-07-26 12:22:16.530431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.530490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.530502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:28.750 [2024-07-26 12:22:16.530514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:28.750 [2024-07-26 12:22:16.530530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.579336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.579386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:28.750 [2024-07-26 12:22:16.579401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.782 ms 00:28:28.750 [2024-07-26 12:22:16.579411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.579472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.579484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:28.750 [2024-07-26 12:22:16.579495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:28.750 [2024-07-26 12:22:16.579505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.579634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.579648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:28.750 [2024-07-26 12:22:16.579659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:28:28.750 [2024-07-26 12:22:16.579669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.579708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.579722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:28.750 [2024-07-26 12:22:16.579732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:28.750 [2024-07-26 12:22:16.579741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.602794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.602847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:28.750 [2024-07-26 12:22:16.602872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.067 ms 00:28:28.750 [2024-07-26 12:22:16.602882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.603022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.603035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:28.750 [2024-07-26 12:22:16.603046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:28.750 [2024-07-26 12:22:16.603056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.646283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.646348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:28.750 [2024-07-26 12:22:16.646365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.273 ms 00:28:28.750 [2024-07-26 12:22:16.646376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:28.750 [2024-07-26 12:22:16.662252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:28.750 [2024-07-26 12:22:16.662298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:28.750 [2024-07-26 12:22:16.662312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.687 ms 00:28:28.750 [2024-07-26 12:22:16.662322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.032 [2024-07-26 12:22:16.752036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.032 [2024-07-26 12:22:16.752109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:29.032 [2024-07-26 12:22:16.752142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.745 ms 00:28:29.032 [2024-07-26 12:22:16.752153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.032 [2024-07-26 12:22:16.752361] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:29.032 [2024-07-26 12:22:16.752483] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:29.032 [2024-07-26 12:22:16.752599] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:29.032 [2024-07-26 12:22:16.752712] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:29.032 [2024-07-26 12:22:16.752724] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.032 [2024-07-26 12:22:16.752735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:29.032 [2024-07-26 12:22:16.752752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:28:29.032 [2024-07-26 12:22:16.752761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.032 [2024-07-26 12:22:16.752852] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:29.032 [2024-07-26 12:22:16.752866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.032 [2024-07-26 12:22:16.752876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:29.033 [2024-07-26 12:22:16.752888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:29.033 [2024-07-26 12:22:16.752898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.033 [2024-07-26 12:22:16.779536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.033 [2024-07-26 12:22:16.779618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:29.033 [2024-07-26 12:22:16.779636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.654 ms 00:28:29.033 [2024-07-26 12:22:16.779646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.033 [2024-07-26 12:22:16.795284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.033 [2024-07-26 12:22:16.795347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:29.033 [2024-07-26 12:22:16.795362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:29.033 [2024-07-26 12:22:16.795377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.033 [2024-07-26 12:22:16.795648] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:29.613 [2024-07-26 12:22:17.288191] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:29.614 [2024-07-26 12:22:17.288368] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:29.873 [2024-07-26 12:22:17.762470] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:29.873 [2024-07-26 12:22:17.762574] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:29.873 [2024-07-26 12:22:17.762590] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:29.873 [2024-07-26 12:22:17.762606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.873 [2024-07-26 12:22:17.762618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:29.873 [2024-07-26 12:22:17.762632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 968.698 ms 00:28:29.873 [2024-07-26 12:22:17.762642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.873 [2024-07-26 12:22:17.762678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.873 [2024-07-26 12:22:17.762689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:29.873 
[2024-07-26 12:22:17.762700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:29.873 [2024-07-26 12:22:17.762710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.873 [2024-07-26 12:22:17.776433] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:29.873 [2024-07-26 12:22:17.776587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.873 [2024-07-26 12:22:17.776601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:29.873 [2024-07-26 12:22:17.776613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.874 ms 00:28:29.873 [2024-07-26 12:22:17.776624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.873 [2024-07-26 12:22:17.777248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.873 [2024-07-26 12:22:17.777267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:29.873 [2024-07-26 12:22:17.777279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:28:29.873 [2024-07-26 12:22:17.777289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.873 [2024-07-26 12:22:17.779267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.873 [2024-07-26 12:22:17.779293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:29.873 [2024-07-26 12:22:17.779306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.958 ms 00:28:29.874 [2024-07-26 12:22:17.779315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.779356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.874 [2024-07-26 12:22:17.779367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:29.874 [2024-07-26 12:22:17.779378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:29.874 [2024-07-26 12:22:17.779388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.779495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.874 [2024-07-26 12:22:17.779511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:29.874 [2024-07-26 12:22:17.779521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:29.874 [2024-07-26 12:22:17.779531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.779553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.874 [2024-07-26 12:22:17.779564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:29.874 [2024-07-26 12:22:17.779573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:29.874 [2024-07-26 12:22:17.779583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.779612] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:29.874 [2024-07-26 12:22:17.779623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.874 [2024-07-26 12:22:17.779634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:29.874 [2024-07-26 12:22:17.779646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:29.874 [2024-07-26 
12:22:17.779656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.779704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:29.874 [2024-07-26 12:22:17.779715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:29.874 [2024-07-26 12:22:17.779725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:29.874 [2024-07-26 12:22:17.779735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:29.874 [2024-07-26 12:22:17.780678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1343.800 ms, result 0 00:28:29.874 [2024-07-26 12:22:17.793007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.874 [2024-07-26 12:22:17.809002] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:29.874 [2024-07-26 12:22:17.819282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:30.133 Validate MD5 checksum, iteration 1 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:30.133 12:22:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:30.133 [2024-07-26 12:22:17.941716] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:28:30.133 [2024-07-26 12:22:17.942383] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85073 ] 00:28:30.393 [2024-07-26 12:22:18.112355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.393 [2024-07-26 12:22:18.339902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.400  Copying: 656/1024 [MB] (656 MBps) Copying: 1024/1024 [MB] (average 631 MBps) 00:28:35.400 00:28:35.400 12:22:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:35.400 12:22:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:37.374 Validate MD5 checksum, iteration 2 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c15b0ecffaf95b1a163104704268d36d 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c15b0ecffaf95b1a163104704268d36d != \c\1\5\b\0\e\c\f\f\a\f\9\5\b\1\a\1\6\3\1\0\4\7\0\4\2\6\8\d\3\6\d ]] 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:37.374 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:37.377 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:37.377 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:37.377 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:37.377 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:37.377 12:22:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:37.377 [2024-07-26 12:22:24.978880] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:28:37.378 [2024-07-26 12:22:24.979003] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85146 ] 00:28:37.378 [2024-07-26 12:22:25.147342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.640 [2024-07-26 12:22:25.379207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.705  Copying: 648/1024 [MB] (648 MBps) Copying: 1024/1024 [MB] (average 639 MBps) 00:28:41.705 00:28:41.705 12:22:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:41.705 12:22:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=463b9dcf4a4d5c57a5dd524cfb4053c1 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 463b9dcf4a4d5c57a5dd524cfb4053c1 != \4\6\3\b\9\d\c\f\4\a\4\d\5\c\5\7\a\5\d\d\5\2\4\c\f\b\4\0\5\3\c\1 ]] 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85033 ]] 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85033 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85033 ']' 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85033 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85033 00:28:43.608 killing process with pid 85033 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85033' 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85033 00:28:43.608 12:22:31 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 85033 00:28:44.547 [2024-07-26 12:22:32.422968] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:44.547 [2024-07-26 12:22:32.442560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.442607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:44.547 [2024-07-26 12:22:32.442623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:44.547 [2024-07-26 12:22:32.442634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.442655] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:44.547 [2024-07-26 12:22:32.446567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.446602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:44.547 [2024-07-26 12:22:32.446615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.902 ms 00:28:44.547 [2024-07-26 12:22:32.446626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.446823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.446835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:44.547 [2024-07-26 12:22:32.446845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:28:44.547 [2024-07-26 12:22:32.446855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.447955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.447987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:44.547 [2024-07-26 12:22:32.447999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:28:44.547 [2024-07-26 12:22:32.448014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.448963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.448990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:44.547 [2024-07-26 12:22:32.449001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.919 ms 00:28:44.547 [2024-07-26 12:22:32.449011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.464435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.464473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:44.547 [2024-07-26 12:22:32.464492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.385 ms 00:28:44.547 [2024-07-26 12:22:32.464503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.472332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.472368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:44.547 [2024-07-26 12:22:32.472380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.806 ms 00:28:44.547 [2024-07-26 12:22:32.472391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.472485] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.472504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:44.547 [2024-07-26 12:22:32.472515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:28:44.547 [2024-07-26 12:22:32.472528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.487875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.487907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:44.547 [2024-07-26 12:22:32.487919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.354 ms 00:28:44.547 [2024-07-26 12:22:32.487928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.503327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.503363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:44.547 [2024-07-26 12:22:32.503375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.390 ms 00:28:44.547 [2024-07-26 12:22:32.503385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.547 [2024-07-26 12:22:32.518099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.547 [2024-07-26 12:22:32.518148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:44.547 [2024-07-26 12:22:32.518162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.705 ms 00:28:44.547 [2024-07-26 12:22:32.518172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.806 [2024-07-26 12:22:32.532863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.806 [2024-07-26 12:22:32.532897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:44.806 [2024-07-26 12:22:32.532910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.649 ms 00:28:44.806 [2024-07-26 12:22:32.532919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.806 [2024-07-26 12:22:32.532952] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:44.806 [2024-07-26 12:22:32.532969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.806 [2024-07-26 12:22:32.532981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:44.806 [2024-07-26 12:22:32.532993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:44.806 [2024-07-26 12:22:32.533003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:44.806 [2024-07-26 12:22:32.533015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:44.807 [2024-07-26 12:22:32.533193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:44.807 [2024-07-26 12:22:32.533204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1b0154a1-58e8-4c6b-b408-534781e5ff98 00:28:44.807 [2024-07-26 12:22:32.533215] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:44.807 [2024-07-26 12:22:32.533224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:44.807 [2024-07-26 12:22:32.533234] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:44.807 [2024-07-26 12:22:32.533244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:44.807 [2024-07-26 12:22:32.533253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:44.807 [2024-07-26 12:22:32.533264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:44.807 [2024-07-26 12:22:32.533277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:44.807 [2024-07-26 12:22:32.533286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:44.807 [2024-07-26 12:22:32.533296] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:44.807 [2024-07-26 12:22:32.533307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.807 [2024-07-26 12:22:32.533318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:44.807 [2024-07-26 12:22:32.533328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:28:44.807 [2024-07-26 12:22:32.533338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.552964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:44.807 [2024-07-26 12:22:32.553004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:44.807 [2024-07-26 12:22:32.553017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.638 ms 00:28:44.807 [2024-07-26 12:22:32.553034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.553530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:44.807 [2024-07-26 12:22:32.553543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:44.807 [2024-07-26 12:22:32.553553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.458 ms 00:28:44.807 [2024-07-26 12:22:32.553563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.614224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.807 [2024-07-26 12:22:32.614278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:44.807 [2024-07-26 12:22:32.614293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.807 [2024-07-26 12:22:32.614309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.614354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.807 [2024-07-26 12:22:32.614365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:44.807 [2024-07-26 12:22:32.614375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.807 [2024-07-26 12:22:32.614384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.614473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.807 [2024-07-26 12:22:32.614488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:44.807 [2024-07-26 12:22:32.614498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.807 [2024-07-26 12:22:32.614508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.614530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.807 [2024-07-26 12:22:32.614541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:44.807 [2024-07-26 12:22:32.614551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.807 [2024-07-26 12:22:32.614560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:44.807 [2024-07-26 12:22:32.732410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:44.807 [2024-07-26 12:22:32.732463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:44.807 [2024-07-26 12:22:32.732478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:44.807 [2024-07-26 12:22:32.732494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.066 [2024-07-26 12:22:32.832405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.066 [2024-07-26 12:22:32.832470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:45.066 [2024-07-26 12:22:32.832484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.066 [2024-07-26 12:22:32.832495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.832597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.832610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:45.067 [2024-07-26 12:22:32.832620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.832630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.832681] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.832699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:45.067 [2024-07-26 12:22:32.832709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.832719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.832815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.832828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:45.067 [2024-07-26 12:22:32.832838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.832848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.832887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.832905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:45.067 [2024-07-26 12:22:32.832917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.832926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.832965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.832976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:45.067 [2024-07-26 12:22:32.832986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.832996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.833038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:45.067 [2024-07-26 12:22:32.833053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:45.067 [2024-07-26 12:22:32.833063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:45.067 [2024-07-26 12:22:32.833072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:45.067 [2024-07-26 12:22:32.833206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 391.248 ms, result 0 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:46.445 Remove shared memory files 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84812 00:28:46.445 12:22:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:46.445 00:28:46.445 real 1m31.349s 00:28:46.445 user 2m8.244s 00:28:46.445 sys 0m22.170s 00:28:46.445 ************************************ 00:28:46.445 END TEST ftl_upgrade_shutdown 00:28:46.445 ************************************ 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.445 12:22:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:46.445 12:22:34 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:46.445 12:22:34 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:46.445 12:22:34 ftl -- ftl/ftl.sh@14 -- # killprocess 77999 00:28:46.445 12:22:34 ftl -- common/autotest_common.sh@950 -- # '[' -z 77999 ']' 00:28:46.446 Process with pid 77999 is not found 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@954 -- # kill -0 77999 00:28:46.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77999) - No such process 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 77999 is not found' 00:28:46.446 12:22:34 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:46.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.446 12:22:34 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85276 00:28:46.446 12:22:34 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85276 00:28:46.446 12:22:34 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@831 -- # '[' -z 85276 ']' 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.446 12:22:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:46.446 [2024-07-26 12:22:34.379368] Starting SPDK v24.09-pre git sha1 1beb86cd6 / DPDK 24.03.0 initialization... 
00:28:46.446 [2024-07-26 12:22:34.379504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85276 ] 00:28:46.704 [2024-07-26 12:22:34.550044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.963 [2024-07-26 12:22:34.793097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.903 12:22:35 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.903 12:22:35 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:47.903 12:22:35 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:48.161 nvme0n1 00:28:48.161 12:22:35 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:48.161 12:22:35 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:48.161 12:22:35 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:48.420 12:22:36 ftl -- ftl/common.sh@28 -- # stores=5106f4cf-3eff-4051-9fdd-24e5d7e6abc5 00:28:48.420 12:22:36 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:48.420 12:22:36 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5106f4cf-3eff-4051-9fdd-24e5d7e6abc5 00:28:48.680 12:22:36 ftl -- ftl/ftl.sh@23 -- # killprocess 85276 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@950 -- # '[' -z 85276 ']' 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@954 -- # kill -0 85276 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@955 -- # uname 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85276 00:28:48.680 killing process with pid 85276 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85276' 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@969 -- # kill 85276 00:28:48.680 12:22:36 ftl -- common/autotest_common.sh@974 -- # wait 85276 00:28:51.214 12:22:38 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:51.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:51.473 Waiting for block devices as requested 00:28:51.473 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:51.732 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:51.732 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:51.991 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:57.262 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:57.262 Remove shared memory files 00:28:57.262 12:22:44 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:57.262 12:22:44 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:57.262 12:22:44 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:57.262 12:22:44 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:57.262 12:22:44 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:57.262 12:22:44 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:57.262 12:22:44 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:57.262 
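The xtrace immediately above records ftl.sh's exit path: a fresh spdk_tgt is brought up, the NVMe controller is attached, leftover lvol stores are deleted, the target is killed, PCI driver bindings are reset, and shared-memory files are removed. A rough bash sketch of that traced sequence follows; it is reconstructed from the commands visible in the log (the clear_lvols loop shape and the killprocess/remove_shm helpers are inferred from the trace, not copied from the script source):

# Reconstructed sketch of the traced ftl.sh teardown (assumptions noted above)
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
# clear_lvols: drop any lvol stores left behind by earlier FTL sub-tests
stores=$($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
for lvs in $stores; do
    $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
done
killprocess "$spdk_tgt_pid"                          # kill -0 liveness check, kill, then wait
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset  # rebind NVMe devices to kernel drivers
remove_shm                                           # rm -f the SPDK shared-memory files (e.g. /dev/shm/iscsi)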
************************************ 00:28:57.262 END TEST ftl 00:28:57.262 ************************************ 00:28:57.262 00:28:57.262 real 10m42.333s 00:28:57.262 user 13m13.997s 00:28:57.262 sys 1m24.353s 00:28:57.262 12:22:44 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:57.262 12:22:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:57.262 12:22:44 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:57.262 12:22:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:57.262 12:22:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:57.262 12:22:44 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:28:57.262 12:22:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:57.262 12:22:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:57.262 12:22:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:57.262 12:22:44 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:28:57.262 12:22:44 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:28:57.262 12:22:44 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:28:57.262 12:22:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.262 12:22:44 -- common/autotest_common.sh@10 -- # set +x 00:28:57.262 12:22:44 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:28:57.262 12:22:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:57.262 12:22:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:57.262 12:22:44 -- common/autotest_common.sh@10 -- # set +x 00:28:59.164 INFO: APP EXITING 00:28:59.164 INFO: killing all VMs 00:28:59.164 INFO: killing vhost app 00:28:59.164 INFO: EXIT DONE 00:28:59.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:59.987 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:59.987 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:59.987 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:59.987 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:00.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:00.860 Cleaning 00:29:00.860 Removing: /var/run/dpdk/spdk0/config 00:29:00.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:00.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:00.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:00.860 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:00.860 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:00.860 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:00.860 Removing: /var/run/dpdk/spdk0 00:29:00.860 Removing: /var/run/dpdk/spdk_pid61776 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62014 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62246 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62350 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62406 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62540 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62563 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62755 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62853 00:29:00.860 Removing: /var/run/dpdk/spdk_pid62953 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63077 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63177 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63222 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63261 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63329 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63426 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63867 00:29:00.860 Removing: /var/run/dpdk/spdk_pid63942 
00:29:00.860 Removing: /var/run/dpdk/spdk_pid64023 00:29:00.860 Removing: /var/run/dpdk/spdk_pid64043 00:29:00.860 Removing: /var/run/dpdk/spdk_pid64198 00:29:00.860 Removing: /var/run/dpdk/spdk_pid64214 00:29:00.860 Removing: /var/run/dpdk/spdk_pid64373 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64389 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64464 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64487 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64552 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64570 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64757 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64799 00:29:01.118 Removing: /var/run/dpdk/spdk_pid64880 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65058 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65159 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65205 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65667 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65771 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65892 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65946 00:29:01.118 Removing: /var/run/dpdk/spdk_pid65976 00:29:01.118 Removing: /var/run/dpdk/spdk_pid66058 00:29:01.118 Removing: /var/run/dpdk/spdk_pid66707 00:29:01.118 Removing: /var/run/dpdk/spdk_pid66757 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67261 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67364 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67485 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67543 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67574 00:29:01.118 Removing: /var/run/dpdk/spdk_pid67605 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69482 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69636 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69640 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69657 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69703 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69707 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69719 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69764 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69768 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69780 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69825 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69829 00:29:01.118 Removing: /var/run/dpdk/spdk_pid69841 00:29:01.118 Removing: /var/run/dpdk/spdk_pid71210 00:29:01.118 Removing: /var/run/dpdk/spdk_pid71317 00:29:01.118 Removing: /var/run/dpdk/spdk_pid72738 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74108 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74224 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74339 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74454 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74591 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74676 00:29:01.118 Removing: /var/run/dpdk/spdk_pid74817 00:29:01.118 Removing: /var/run/dpdk/spdk_pid75203 00:29:01.118 Removing: /var/run/dpdk/spdk_pid75246 00:29:01.118 Removing: /var/run/dpdk/spdk_pid75699 00:29:01.118 Removing: /var/run/dpdk/spdk_pid75890 00:29:01.118 Removing: /var/run/dpdk/spdk_pid75995 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76107 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76174 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76205 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76501 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76571 00:29:01.118 Removing: /var/run/dpdk/spdk_pid76661 00:29:01.377 Removing: /var/run/dpdk/spdk_pid77056 00:29:01.377 Removing: /var/run/dpdk/spdk_pid77207 00:29:01.377 Removing: /var/run/dpdk/spdk_pid77999 00:29:01.377 Removing: /var/run/dpdk/spdk_pid78140 00:29:01.377 Removing: /var/run/dpdk/spdk_pid78345 00:29:01.377 Removing: 
/var/run/dpdk/spdk_pid78453 00:29:01.377 Removing: /var/run/dpdk/spdk_pid78779 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79033 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79389 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79601 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79731 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79800 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79937 00:29:01.377 Removing: /var/run/dpdk/spdk_pid79969 00:29:01.377 Removing: /var/run/dpdk/spdk_pid80038 00:29:01.377 Removing: /var/run/dpdk/spdk_pid80233 00:29:01.377 Removing: /var/run/dpdk/spdk_pid80470 00:29:01.377 Removing: /var/run/dpdk/spdk_pid80857 00:29:01.377 Removing: /var/run/dpdk/spdk_pid81251 00:29:01.377 Removing: /var/run/dpdk/spdk_pid81655 00:29:01.377 Removing: /var/run/dpdk/spdk_pid82124 00:29:01.377 Removing: /var/run/dpdk/spdk_pid82267 00:29:01.377 Removing: /var/run/dpdk/spdk_pid82363 00:29:01.377 Removing: /var/run/dpdk/spdk_pid82937 00:29:01.377 Removing: /var/run/dpdk/spdk_pid83011 00:29:01.377 Removing: /var/run/dpdk/spdk_pid83422 00:29:01.377 Removing: /var/run/dpdk/spdk_pid83770 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84222 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84344 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84409 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84473 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84531 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84605 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84812 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84888 00:29:01.377 Removing: /var/run/dpdk/spdk_pid84955 00:29:01.377 Removing: /var/run/dpdk/spdk_pid85033 00:29:01.377 Removing: /var/run/dpdk/spdk_pid85073 00:29:01.377 Removing: /var/run/dpdk/spdk_pid85146 00:29:01.377 Removing: /var/run/dpdk/spdk_pid85276 00:29:01.377 Clean 00:29:01.377 12:22:49 -- common/autotest_common.sh@1451 -- # return 0 00:29:01.377 12:22:49 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:29:01.377 12:22:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.377 12:22:49 -- common/autotest_common.sh@10 -- # set +x 00:29:01.635 12:22:49 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:29:01.635 12:22:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.635 12:22:49 -- common/autotest_common.sh@10 -- # set +x 00:29:01.635 12:22:49 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:01.635 12:22:49 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:01.635 12:22:49 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:01.635 12:22:49 -- spdk/autotest.sh@395 -- # hash lcov 00:29:01.635 12:22:49 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:01.635 12:22:49 -- spdk/autotest.sh@397 -- # hostname 00:29:01.635 12:22:49 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:01.893 geninfo: WARNING: invalid characters removed from testname! 
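The lcov capture above, together with the merge and filter entries that follow, is a standard three-step coverage flow: capture counters from the build tree into a tracefile, fold the pre-test baseline and the post-test capture together, then strip paths that should not count toward coverage. A condensed sketch with illustrative paths (it assumes a baseline cov_base.info was captured with lcov --initial before the tests, and omits the extra --rc options the harness passes):

    #!/usr/bin/env bash
    set -e
    out=coverage          # illustrative output directory
    mkdir -p "$out"

    # 1. Capture counters accumulated under the source tree during the tests.
    lcov --capture --directory ./spdk --no-external -q \
         --output-file "$out/cov_test.info"

    # 2. Merge the pre-test baseline with the post-test capture so files that
    #    were never executed still appear with 0% coverage.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
         -o "$out/cov_total.info"

    # 3. Drop third-party and helper code from the merged tracefile.
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"

    # 4. Render the filtered tracefile as an HTML report.
    genhtml -q "$out/cov_total.info" --output-directory "$out/html"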
00:29:28.437 12:23:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:29.373 12:23:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:31.904 12:23:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:33.807 12:23:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:36.338 12:23:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:38.243 12:23:26 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:40.778 12:23:28 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:40.778 12:23:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:40.778 12:23:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:40.778 12:23:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.778 12:23:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.778 12:23:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.778 12:23:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.778 12:23:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.778 12:23:28 -- paths/export.sh@5 -- $ export PATH 00:29:40.778 12:23:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.778 12:23:28 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:40.778 12:23:28 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:40.778 12:23:28 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721996608.XXXXXX 00:29:40.778 12:23:28 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721996608.IeBn0P 00:29:40.778 12:23:28 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:40.778 12:23:28 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:29:40.778 12:23:28 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:40.778 12:23:28 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:40.778 12:23:28 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:40.778 12:23:28 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:40.778 12:23:28 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:29:40.778 12:23:28 -- common/autotest_common.sh@10 -- $ set +x 00:29:40.778 12:23:28 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:29:40.778 12:23:28 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:40.778 12:23:28 -- pm/common@17 -- $ local monitor 00:29:40.778 12:23:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:40.778 12:23:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:40.778 12:23:28 -- pm/common@21 -- $ date +%s 00:29:40.778 12:23:28 -- pm/common@25 -- $ sleep 1 00:29:40.778 12:23:28 -- pm/common@21 -- $ date +%s 00:29:40.778 12:23:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721996608 00:29:40.778 12:23:28 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721996608 00:29:40.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721996608_collect-vmstat.pm.log 00:29:40.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721996608_collect-cpu-load.pm.log 00:29:41.347 12:23:29 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:41.347 12:23:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:41.347 12:23:29 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:41.347 12:23:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:41.347 12:23:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:41.347 12:23:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:41.347 12:23:29 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:41.347 12:23:29 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:41.347 12:23:29 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:41.606 12:23:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:41.606 12:23:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:41.606 12:23:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:41.606 12:23:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:41.606 12:23:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:41.606 12:23:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:41.606 12:23:29 -- pm/common@44 -- $ pid=86969 00:29:41.606 12:23:29 -- pm/common@50 -- $ kill -TERM 86969 00:29:41.606 12:23:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:41.606 12:23:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:41.606 12:23:29 -- pm/common@44 -- $ pid=86971 00:29:41.606 12:23:29 -- pm/common@50 -- $ kill -TERM 86971 00:29:41.606 + [[ -n 5139 ]] 00:29:41.606 + sudo kill 5139 00:29:41.615 [Pipeline] } 00:29:41.636 [Pipeline] // timeout 00:29:41.641 [Pipeline] } 00:29:41.655 [Pipeline] // stage 00:29:41.662 [Pipeline] } 00:29:41.675 [Pipeline] // catchError 00:29:41.684 [Pipeline] stage 00:29:41.687 [Pipeline] { (Stop VM) 00:29:41.699 [Pipeline] sh 00:29:41.978 + vagrant halt 00:29:45.274 ==> default: Halting domain... 00:29:51.876 [Pipeline] sh 00:29:52.156 + vagrant destroy -f 00:29:55.466 ==> default: Removing domain... 
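Just before the VM above is halted and destroyed, the stop_monitor_resources trap ends each background resource monitor by reading its pid file and sending SIGTERM, and the leftover process (pid 5139 here) is killed with sudo. A minimal sketch of that pid-file teardown pattern; the directory and monitor names mirror the log, but the function body is illustrative:

    #!/usr/bin/env bash

    power_dir=/home/vagrant/spdk_repo/output/power   # illustrative location of the pid files

    stop_monitors_sketch() {
        local monitor pid
        for monitor in collect-cpu-load collect-vmstat; do
            [[ -e $power_dir/$monitor.pid ]] || continue
            pid=$(<"$power_dir/$monitor.pid")
            # SIGTERM lets each collector flush a final sample to its .pm.log.
            kill -TERM "$pid" 2>/dev/null || true
        done
    }

    stop_monitors_sketch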
00:29:55.735 [Pipeline] sh 00:29:56.013 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:29:56.022 [Pipeline] } 00:29:56.040 [Pipeline] // stage 00:29:56.046 [Pipeline] } 00:29:56.063 [Pipeline] // dir 00:29:56.068 [Pipeline] } 00:29:56.085 [Pipeline] // wrap 00:29:56.091 [Pipeline] } 00:29:56.106 [Pipeline] // catchError 00:29:56.115 [Pipeline] stage 00:29:56.118 [Pipeline] { (Epilogue) 00:29:56.132 [Pipeline] sh 00:29:56.415 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:01.697 [Pipeline] catchError 00:30:01.699 [Pipeline] { 00:30:01.715 [Pipeline] sh 00:30:01.999 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:01.999 Artifacts sizes are good 00:30:02.010 [Pipeline] } 00:30:02.032 [Pipeline] // catchError 00:30:02.044 [Pipeline] archiveArtifacts 00:30:02.053 Archiving artifacts 00:30:02.157 [Pipeline] cleanWs 00:30:02.167 [WS-CLEANUP] Deleting project workspace... 00:30:02.167 [WS-CLEANUP] Deferred wipeout is used... 00:30:02.173 [WS-CLEANUP] done 00:30:02.174 [Pipeline] } 00:30:02.192 [Pipeline] // stage 00:30:02.197 [Pipeline] } 00:30:02.214 [Pipeline] // node 00:30:02.219 [Pipeline] End of Pipeline 00:30:02.267 Finished: SUCCESS
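The epilogue above compresses the output directory, gates on artifact size (check_artifacts_size.sh reports 'Artifacts sizes are good'), archives the result, and wipes the workspace before the job finishes. The actual gate script is not shown in this log; the sketch below only illustrates the general shape of such a check, with a made-up 2 GiB limit and default path:

    #!/usr/bin/env bash
    # Hypothetical artifact-size gate: fail the build if the archive would be too large.
    artifacts_dir=${1:-output}                 # illustrative default
    limit_kb=$((2 * 1024 * 1024))              # 2 GiB, made-up threshold

    used_kb=$(du -sk "$artifacts_dir" | awk '{print $1}')
    if (( used_kb > limit_kb )); then
        echo "Artifacts too large: ${used_kb} KB (limit ${limit_kb} KB)" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"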